New research has shown that cancer cells that survive treatment may effectively wake themselves up and resume growing by tapping into the body’s immune system as a power source. The work offers new insight into why, and how, cancer relapses, and into why the immune system may fail to fight cancer effectively. It also suggests that, by restoring the body’s immune response to proper functioning, immunotherapy might prove effective at preventing relapse of the deadly disease.
The research question for this team was how a relatively tiny number of treatment-resistant cancer cells could become a deadly occupying force after remaining dormant for stretches of time. To answer it, the team studied immune responses in mice, watching how immune cells behaved before cancer treatment, after treatment was declared a success, and once tumors returned.
Normally, immune cells trigger inflammation by releasing signals when they sense infection or trauma. This can, in some cases, help the immune system kill cancer cells. However, post-treatment cancer cells subvert these signals. In the study, researchers targeted the response to the post-treatment cells using immunotherapies, and were able to delay — or in some cases prevent — relapse.
The key discovery was that a chemical signal called TNF-alpha can switch roles: it can either serve as an anti-tumor agent, supporting the immune response to cancer cells, or as a new type of relapse-promoting signal. TNF-alpha was also able to trigger the resistant cells, prompting them to instruct the body’s natural cancer-killing cells not to attack.
Cells Ignoring the Problem
Specifically, the team found that resistant cancer cells carried high levels of a molecule called PD-L1 on their surface. PD-L1 on these cells can engage the PD-1 receptor on T cells, instructing them to ignore the threat. These results suggest that, after surviving treatment, cancer cells wake themselves up and start growing again using the patient’s immune system. Once this mechanism is better understood, immunotherapy could provide a potential treatment for humans, and this is the next area of research the team plans to pursue.
“Our study finds the body’s own immune system seems to play a crucial role when cancer relapses,” co-author Professor Alan Melcher said in a press release. “The immune system goes from keeping cancer cells in check to awakening and feeding residual cells, while turning a blind eye to their growth.”
“Excitingly, many of the methods employed by treatment-resistant tumours to re-grow and hide from the immune system can be blocked using existing immunotherapies.” The researchers added that, based on recent supporting data from clinical trials, immunotherapy shows real promise for reducing cancer relapse.
Co-author Kevin Harrington added: “It is becoming increasingly clear that the immune system is at the core of the puzzle of how we can treat cancer more effectively.”
An international team of scientists has come up with a new explanation for why space occupies three dimensions. The theory hinges upon flux tubes, which are composed of quarks held together by gluons.
In normal circumstances, a flux tube linking a quark with an antiquark will disappear when the two particles make contact and annihilate one another. However, flux tubes can stabilize and form knots in certain circumstances, such as when they become interlinked with one another. If the early universe was filled with such a network of flux tubes, the amount of energy contained within the structure could explain cosmic inflation, the exponential expansion of the early universe.
However, it’s thought that three-dimensional knots like the ones formed by flux tubes will unravel in higher dimensional spaces. As a result, cosmic inflation would have to have been limited to three dimensions, with any further dimensions being small beyond perception, if they do exist.
At this point, the theory is just a model for what could have happened during the earliest period of our universe’s formation. However, it could answer some burning questions about how the cosmos evolved into its current form. The researchers will continue developing the concept, with the goal of producing testable predictions about the universe that could verify the model’s accuracy.
A team of researchers at Case Western Reserve University School of Medicine has identified a conserved molecular pathway that controls health and lifespan in nematode worms (Caenorhabditis elegans) — a frequently studied model organism in biological research.
“We find that by artificially increasing or decreasing the levels of a family of proteins called Kruppel-like transcription factors (KLFs), we can actually get C. elegans to live for longer or shorter time periods,” study first author Dr. Nelson Hsieh said to Sci News. “Since this same family of proteins also exists in mammals, what is really exciting is that our data suggests KLFs also have similar effects on aging in mammals, too.”
Animal models are essential to biological research. For ethical reasons, most preliminary work on a human disease or process cannot be performed on humans, so researchers must instead develop a model. While some models can remain in vitro (outside a living body), the disease must eventually be reproduced in a context that allows study, such as a living body. Animal models like nematode worms and mice offer a much closer approximation of the human response to disease and other processes.
Roughly 99 percent of mouse genes have human counterparts, and mice have short lifespans compared to humans and other mammals, so “knockout” studies in mice allow researchers to observe the effects of removing single genes. C. elegans is also useful, especially in aging research, for several reasons. For one, the worms have a short, three-day life cycle, allowing researchers to observe many generations quickly. They are very simple organisms, which makes the effects of single proteins much easier to see. And the C. elegans genome was fully sequenced by the end of 1998, so scientists have had ample opportunity to experiment with this relatively short but very well-known genome.
Lessons Applied to Humans
The research also demonstrated that C. elegans with high levels of KLF proteins lived healthier, longer lives than normal worms. Similarly, mice with elevated levels of KLF proteins were found to exhibit a delay in age-related blood vessel dysfunction. The team determined that the KLF proteins’ function is to control autophagy — a quality control mechanism that allows cells to clear up debris such as molecular byproducts and misfolded proteins that build up over time, reaching critical mass in old age. Loss of this process of cleaning and recycling is a classic sign of aging.
Cells are less able to undertake these recycling functions as they age. Once an unsustainable level of toxic debris builds up, cellular survival is threatened. This is why the team observed worms without KLF proteins dying early, as their cells were no longer able to maintain autophagy. From here, the researchers plan to study how autophagy affects blood vessel function, and develop strategies for targeting KLF proteins in humans.
“The observation that KLF levels decrease with age and that sustained levels of KLFs can prevent the age-associated loss of blood vessel function is intriguing given that vascular dysfunction contributes significantly to diverse age-associated conditions such as hypertension, heart disease, and dementia,” senior author Professor Mukesh K. Jain said to Sci News.
“As our population ages, we need to understand what happens to our heart and arteries, as we rely on them to function perfectly later and later on in our lives,” Dr. Hsieh added to Sci News. “Our findings illuminate what can happen during aging, and provide a foundation to designing interventions which slow these processes.”
Intel has announced that it has successfully fabricated a 17-qubit superconducting test chip for quantum computing. The superconducting chip has been submitted to the company’s quantum research partner QuTech for further testing.
Quantum computing has the potential to be a truly revolutionary technology, providing a currently unprecedented amount of computational power. However, the qubits that underlie the hardware are notoriously fragile — Intel’s implementation requires an operating environment that maintains a temperature of 20 millikelvin to function.
The new chip boasts an improved design that provides better reliability, enhanced thermal performance, and a reduction of the amount of radio frequency interference between qubits. It also introduces a scalable interconnect scheme that makes it possible to exchange between 10 and 100 times more signals in and out of the chip, compared to a wirebonded alternative.
Crucially, Intel has employed processes, materials, and designs that will allow it to scale up its packaging for quantum integrated circuits, which are much larger than standard silicon chips. This could prove to be an important step in moving from the production of components to a fully fledged quantum computer.
Next Generation of Superconducting
“Our quantum research has progressed to the point where our partner QuTech is simulating quantum algorithm workloads, and Intel is fabricating new qubit test chips on a regular basis in our leading-edge manufacturing facilities,” commented Dr. Michael Mayberry, the corporate vice president and managing director of Intel Labs, in a press release.
While the field of quantum computing has made significant advances in recent years, there is still plenty of work to be done before a large-scale universal quantum computer is viable. Intel is keeping its options open, continuing research into spin qubits in silicon even while focusing on superconducting qubits.
Theoretical research has propelled quantum computing forward by leaps and bounds over the past decade, but Intel’s investigations into the practical side of fabrication are essential for the next stage of the process.
“This discovery is significant because it gives us two completely different ways to view the same thing,” Brian Koberlein, astrophysicist and Senior Lecturer at Rochester Institute of Technology, explained to Futurism. “Imagine trying to follow a detective show if you could just hear the show without video, or just see the show without hearing it. When you combine the two you get the whole picture.”
Gravitational waves are ripples in spacetime that were first theorized by Albert Einstein about a century ago. Last year marked the first time we were able to observe the phenomenon, and so far, all the gravitational waves we’ve observed have been caused by particularly intense events, such as the merging of black holes and, most recently, neutron stars. Neutron star mergers have also long been considered the most likely cause of short gamma ray bursts, cosmic explosions that emit massive amounts of light.
Scientists have long suspected that the merger of two neutron stars would be followed by a kilonova, an explosive occurrence some 1,000 times brighter than a standard nova. They speculated that the near-simultaneous observations of gravitational waves and a gamma ray burst could indicate that such an event was taking place, and subsequent study by ESO facilities indicates that this was the case: a kilonova had finally been observed.
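For a sense of what that brightness factor means on the astronomical magnitude scale, here is a quick sketch of the arithmetic. The factor of 1,000 is the figure quoted above; the rest is just the standard magnitude definition:

```python
import math

# A kilonova is described as roughly 1,000 times brighter than a standard
# nova. On the astronomical magnitude scale, a factor of 100 in brightness
# corresponds to exactly 5 magnitudes (the Pogson convention), so the
# magnitude difference is 2.5 * log10 of the brightness ratio.
brightness_ratio = 1_000
delta_mag = 2.5 * math.log10(brightness_ratio)

print(f"a kilonova outshines a nova by about {delta_mag:.1f} magnitudes")
```

So the quoted factor of 1,000 works out to a 7.5-magnitude gap between the two kinds of explosion.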
“I had never seen anything like it,” Stephen Smartt, who led observations with the ESO’s New Technology Telescope, said in an ESO press release. “Our data, along with data from other groups, proved to everyone that this was not a supernova or a foreground variable star, but was something quite remarkable.”
No single facility was responsible for this breakthrough discovery — it was truly a collaborative effort. A large swathe of sky containing millions of stars needed to be monitored after the initial detection of the gravitational waves and light, so a host of different facilities were called into action.
Paranal Observatory’s Visible and Infrared Survey Telescope and VLT Survey Telescope, La Silla Observatory’s Rapid Eye Mount telescope, Las Cumbres Observatory’s LCO 0.4-meter telescope, and Cerro Tololo Inter-American Observatory’s Dark Energy Camera were all used to observe further developments.
The Swope 1-meter telescope at Cerro Las Campanas was the first to spot a new point of light appearing close to the lenticular galaxy designated NGC 4993. This point of light was monitored by the Pan-STARRS telescope and the Subaru telescope in Hawaii overnight. Even more observatories and telescopes would study the event over the following couple of weeks.
“ESO’s great strength is that it has a wide range of telescopes and instruments to tackle big and complex astronomical projects, and at short notice,” said Andrew Levan, lead author of one of the papers. “We have entered a new era of multi-messenger astronomy!”
This interview has been slightly edited for clarity and brevity.
Quantum theory predicts entanglement: that huge numbers of atoms can be intertwined by quantum forces, across distances or inside macroscopic structures. However, “predicts” has been the key word until recently, as hard experimental evidence has been lacking. That evidence was just presented by University of Geneva (UNIGE) scientists, who demonstrated the entanglement of 16 million atoms in a one-centimeter crystal.
Achieving entanglement hasn’t been the real challenge for physicists looking to generate empirical proof of the concept; researchers can create entangled photons by splitting a single photon. It is observing and recording entanglement that has proven next to impossible — until now, and with one caveat, as UNIGE applied physics group researcher Florian Fröwis explained in a press release about the team’s research: “But it’s impossible to directly observe the process of entanglement between several million atoms since the mass of data you need to collect and analyze is so huge.”
Therefore, Fröwis and his team took inventory of the measurements they were able to take and determined which of those might generate the evidence they were searching for. They settled on analyzing the light re-emitted by the crystal along a single direction, studying its statistical properties. This is how the team was able to show the entanglement of 16 million atoms, rather than a few thousand.
Quantum networks will be essential to data protection in the future, because they make it possible to send a signal and immediately detect any interception of that signal by a third party. To send and receive these kinds of signals, you need quantum repeaters, which can unify entangled atoms with a strong quantum relationship despite their being separated by great distances. These quantum repeaters house crystal blocks cooled to about 270 degrees Celsius below zero and enriched with rare earth atoms. Once these blocks are penetrated by a photon, entanglement is created.
Particle entanglement is at the heart of the coming revolutions in quantum computing and quantum encryption, which will themselves be driving everything from artificial intelligence to personalized medicine. And while this is high-level stuff, it all depends on the entanglement of atoms at the quantum level, which this research has demonstrated on an unprecedented scale.
Chemists from The Scripps Research Institute (TSRI) have demonstrated a way to create new materials from DNA. Using recent advances in editing techniques, researchers Floyd Romesberg and Tingjian Chen were able to chemically modify DNA nucleotides to produce substances with qualities that could be useful for medical purposes. Their work has been published in Angewandte Chemie.
Previously, Romesberg’s laboratory had successfully produced an artificial DNA polymerase enzyme capable of making copies of modified DNA in the same way that a standard polymerase makes copies of normal DNA. However, in that case, the researchers were only modifying the DNA by attaching fluorine or methoxy moieties to the nucleotides’ sugar backbone. This new study demonstrates several further modifications, each of which offers up its own potential applications.
One of the modifications carried out by the chemists was the addition of an azido group to the DNA. Using a series of techniques known as click chemistry, the researchers could then attach various other molecules to the DNA via this azido group.
A more advanced implementation affixed multiple strands of DNA to one central strand that had been modified with an azido group. That DNA was then amplified using a method known as polymerase chain reaction, and when the resulting structure was put in contact with water, it formed a hydrogel.
A Happy Surprise
The researchers weren’t expecting this hydrogel to form, but they were pleased to see that it did.
Hydrogels have a host of potential applications, ranging from drug delivery to the cultivation of cells in a three-dimensional culture. However, they’re difficult to produce in a controlled manner. The techniques being developed at TSRI could change all that.
Next, the researchers plan to further investigate how to put substances like these hydrogels to good use. They also plan to develop further DNA modifications that could easily be reproduced using the polymerase.
As TSRI’s Floyd Romesberg, a professor of chemistry at the institution, noted in a TSRI press release, “DNA has some unique properties as a material, and with this new ability to modify it and replicate it like normal DNA, we can really begin to explore some interesting potential applications.”
Earlier this month, NASA issued a press release stating that it’s likely that our solar system has a ninth planet—even if it’s proving difficult to find.
The planet could have a mass ten times that of Earth and be situated twenty times as far from the sun as Neptune. It’s being referred to as “Planet Nine,” and while clear evidence of its existence is very difficult to procure, some scientists are absolutely convinced that it’s out there.
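Those figures allow a rough consistency check on why the planet is so hard to find. Treating the quoted distance as a semi-major axis (an assumption; the text gives only a single distance), Kepler’s third law gives the orbital period:

```python
# Back-of-the-envelope orbital period for the hypothesized Planet Nine.
# For a body orbiting the sun, Kepler's third law reduces to P^2 = a^3
# when P is measured in years and a in astronomical units (AU).
NEPTUNE_AU = 30.1                # Neptune's semi-major axis, in AU
a = 20 * NEPTUNE_AU              # "twenty times as far from the sun as Neptune"
period_years = a ** 1.5          # solve P = a^(3/2)

print(f"estimated orbital period: about {period_years:,.0f} years")
```

That puts a single Planet Nine orbit in the range of roughly 15,000 years, which helps explain why a dim, slow-moving body at that distance is so difficult to confirm.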
“There are now five different lines of observational evidence pointing to the existence of Planet Nine,” said Konstantin Batygin, a planetary astrophysicist at the California Institute of Technology (Caltech), who is part of a team on the search for the planet.
“If you were to remove this explanation and imagine Planet Nine does not exist, then you generate more problems than you solve. All of a sudden, you have five different puzzles, and you must come up with five different theories to explain them.”
In a 2016 paper, Batygin and co-author Mike Brown detailed six known objects in the Kuiper Belt that behave rather strangely. All of them have elliptical orbits pointing in the same direction, and all of those orbits are tilted the same way. Both of these traits serve as clues to the presence of Planet Nine.
Computer simulations that took the hypothesized planet into account indicated that there should also be several other bodies with more extreme tilts from the solar plane, on the order of 90 degrees. Brown realized that astronomers were already aware of five such objects, meaning the observations fit the simulations.
Planet Nine would also explain why the plane in which the other planets orbit is tilted about six degrees away from the sun’s equator: over time, Planet Nine’s distant gravity has made the entire solar system plane wobble away from center.
Finally, there are the objects in the Kuiper belt that orbit in the opposite direction of everything else in our solar system.
“No other model can explain the weirdness of these high-inclination orbits,” explained Batygin. “It turns out that Planet Nine provides a natural avenue for their generation. These things have been twisted out of the solar system plane with help from Planet Nine and then scattered inward by Neptune.”
Based on the behavior of these distant objects, the astronomers believe the planet to be a Super-Earth, a massive rocky planet that is extremely common in the universe—but which our solar system, oddly, lacks. The planet could have coalesced out in the cold reaches of our system over millions of years, formed close to the sun and then been flung outward, or even been captured by the Sun from another system.
Batygin and Brown are using the Subaru Telescope at Hawaii’s Mauna Kea Observatory to continue their search for Planet Nine. According to Batygin, this telescope is the best tool available to hunt down something dim and distant in the vast expanse of sky.
However, Planet Nine isn’t the only possible explanation for the orbital behaviors observed. A recent survey of the outer solar system found over 800 trans-Neptunian objects, and a random distribution of this matter could potentially produce the same effects on the observed orbits—but the jury is still out.
Engineers at Cornell University have developed a programmable synthetic skin inspired by the remarkable ways octopuses and cuttlefish blend into their environments. The project spawned a stretchable material that’s capable of morphing into a variety of 3D shapes.
The pneumatically-activated material draws inspiration from the papillae that cephalopods use to camouflage. These papillae are muscular hydrostats with no skeletal support—much like the human tongue. The research team looked at these structures to create synthetic tissues capable of similar shape-shifting abilities.
The result is a synthetic material that can extend and retract to form a wide variety of 3D shapes. However, while the team was influenced by camouflage techniques, the project has a much broader range of uses.
Morph and Stretch
“Engineers have developed a lot of sophisticated ways to control the shape of soft, stretchable materials, but we wanted to do it in a simple way that was fast, strong, and easy to control,” said James Pikul, the lead author of the paper, in a press release. “We were drawn by how successful cephalopods are at changing their skin texture, so we studied and drew inspiration from the muscles that allow cephalopods to control their texture, and implemented these ideas into a method for controlling the shape of soft, stretchable materials.”
The engineers behind the project have indicated that the synthetic skin might offer up some important advantages in scenarios where temperature control is important. The material could be programmed such that its 2D configuration reflects light, while its 3D arrangement absorbs it, regulating or manipulating the temperature as needed.
The ability to quickly switch between a flat 2D surface and a bumpy 3D exterior could also be useful in objects that need to pass through water or air. Changing the amount of drag generated by the material might be an effective way of regulating speed. This is one of the main ways that cephalopods use their papillae—forming shapes to serve as camouflage while they remain very still, then quickly transitioning to a smooth surface so they are as hydrodynamic as possible for a quick escape.
Particle physics is in some ways like peeling an onion; you can peel back each layer, going smaller and smaller, until you are studying the tiniest of particles. Interestingly, it’s these minuscule particles that offer physicists the most insight into the secrets of our vast universe. Every atom has a nucleus at its center, and inside nuclei are nucleons: protons and neutrons. Even smaller particles, quarks and gluons, comprise nucleons, each of which has its own intrinsic spin.
To understand how chemical and physical processes work, it is important to know how these elementary particles spin. The fundamental properties of any given material are controlled by spin, so how a material behaves at different temperatures, its conductivity, and other properties all have to do with the spin of these tiny particles.
Since 1987, the physics community has been stumped by the “proton spin crisis.” Although older theoretical models attributed all nucleon spin to the quarks that comprise the nucleon, research from CERN, SLAC, and DESY revealed that only 30 percent of proton spin can be attributed to quarks.
As scientists have worked to identify the source of the remaining spin, they have long chalked it up to relativistic quantum effects without being able to describe the process in any more detail — until now.
New Spin on an Old Physics Problem
Researchers have, at last, deciphered the nucleon’s spin — including how the various particles that make up the nucleon contribute to it. The team used the CSCS supercomputer Piz Daint to calculate the formerly mysterious quantum effects and to parse out the relative contributions that the nucleon’s constituent gluons, quarks, and sea quarks each make to its spin. Sea quarks are an intermediate state of quark-antiquark pairs that exist, briefly, inside the nucleon.
The researchers first assessed the quarks’ true physical mass in order to accurately calculate their spin. This is easier said than done, since individual quarks and gluons cannot be isolated; they are held together by the strong force, one of the four fundamental forces of physics. The team got around this issue by fixing the masses of the up and down quarks based on the mass of the pion, a meson made up of one up quark and one down antiquark. Additional challenges included reducing statistical errors in calculating the spin contributions and converting dimensionless values from the simulations into measurable physical values. The Piz Daint supercomputer was essential to each of these steps.
“Making powerful supercomputers like Piz Daint open and available across Europe is extremely important for European science,” DESY-Zeuthen lead scientist and co-author Karl Jansen told Phys.org. “Simulations as elaborate as this were possible only thanks to the power of Piz Daint, and because we pre-optimised our algorithms to make best use of the graphics processors of the machine,” University of Cyprus and the Cyprus Institute professor of physics and project leader Constantia Alexandrou added in the same report.
On Monday, October 16, the National Science Foundation (NSF) will host an event at the National Press Club in Washington, DC, featuring researchers from the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo collaborations, along with scientists from approximately 70 observatories around the world. Journalists are also invited to attend the event, which is intended to be the global reveal for new findings on gravitational waves.
First the scientists will discuss the new findings, which are from LIGO, Virgo, and various other observatories from all over the world. Next, telescope teams studying extreme cosmic events in partnership with the LIGO and Virgo collaborations will discuss their recent findings. The event will begin for the press and public at 10:00 a.m., EDT.
On September 14, 2015, the LIGO team first detected gravitational waves, a discovery it announced in February 2016. Gravitational waves are created (among other things) by the compression and release of the fabric of spacetime as two black holes orbit each other in a dance of death. The first observed event confirmed Einstein’s general theory of relativity, which treats space and time as a single, unified phenomenon, and was a milestone in astronomy and physics that ushered in the new field of gravitational-wave astronomy. Three more detections have been confirmed since then, the most recent of which was the first joint LIGO and Virgo detection.
Solving Time-Old Mysteries
Physicists from the LIGO project were recently awarded the Nobel Prize for their work with gravitational waves. Detecting the waves has permanently changed astronomy and physics, and not simply because it confirms Einstein’s theory of relativity. The detections also offer a new, independent way to measure how the universe is expanding. Gravitational-wave research is likewise expected to shed light on the nature of dark matter, a term that is ultimately a placeholder for a massive we-know-not-what that has long eluded the scientific community.
Event organizers are asking journalists who wish to attend to RSVP as soon as possible to firstname.lastname@example.org, and no later than noon EDT on Friday, October 13. The event will be held in the Holeman Lounge of the National Press Club, 529 14th St. NW, 13th Floor, Washington, DC.
New research from scientists at Columbia University’s Zuckerman Institute has turned our classical understanding of how the human brain perceives and recalls information on its head. The work shows that when the brain observes, it first processes details in order to construct internal models of more complex objects. When it recalls, however, the brain first retrieves those complex representational models and only then goes back to the details it originally perceived. The research relied on Bayes’ theorem and other mathematical modeling, and may have practical applications in many places — from evaluating testimony in the courtroom to treating people with sensory processing differences, such as those with autism.
Lacking direct evidence, scientists had long assumed that perception and decoding followed the same hierarchy: from details to complex objects. This research shows that the assumption is incorrect for the decoding that takes place during recall.
The team unraveled the decoding hierarchy of the human brain by focusing on simple recall tasks that could be clearly interpreted. In the first task, subjects had half a second to view a line angled at 50 degrees on a computer screen. After it disappeared, they moved two dots on the screen to approximate the angle they remembered, and repeated this task 50 times. The second task was identical, except the angle of the line was 53 degrees. In the third task, the subjects saw both lines at the same time, and then tried to match pairs of dots to each angle.
“Memories of exact angles are usually imprecise, which we confirmed during the first set of one-line tasks. So, in the two-line task, traditional models predicted that the angle of the 50-degree line would frequently be reported as greater than the angle of the 53-degree line,” Mortimer B. Zuckerman Mind Brain Behavior Institute neuroscientist and study principal investigator Ning Qian said in a press release. However, that wasn’t what happened, and several other findings also contrasted with traditional models.
Explaining the Process
The authors proposed that in everyday life context is more important than details, so reverse decoding makes sense. For example, when we see a new face, its expression — such as anger or friendliness — is what really matters; we focus on details such as the shape of its features only later, if need be, and we do so by estimating. “Even your daily experience shows that perception seems to go from high to low levels,” Dr. Qian said in the press release.
The team then created a mathematical model to explain what they believe happens in the brain using Bayesian inference, with the higher-level complex features as the prior information in the statistical model for decoding lower-level features, rather than the details being used to decode or recall the bigger picture. The model’s predictions were a good fit for the behavioral data.
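To make the two decoding schemes concrete, here is a minimal Monte Carlo sketch of the idea, not the authors’ actual model: the noise level is an assumed, illustrative value, and the high-level prior is reduced to the single remembered fact that one line is counterclockwise of the other, enforced by rejection sampling.

```python
import random
import statistics

random.seed(0)

TRUE_A, TRUE_B = 50.0, 53.0   # the two line angles from the tasks (degrees)
SIGMA = 5.0                   # assumed memory noise; an illustrative value only
TRIALS = 10_000

def noisy(angle):
    """One noisy low-level memory trace of an angle, in degrees."""
    return random.gauss(angle, SIGMA)

# Details-first (traditional) decoding: each angle is recalled independently,
# so the 50-degree line is sometimes reported as greater than the 53-degree one.
reversals = sum(noisy(TRUE_A) > noisy(TRUE_B) for _ in range(TRIALS))

# High-level-first (reverse) decoding: the coarse fact "line A is
# counterclockwise of line B" is recalled first and acts as a prior that
# constrains the detailed estimates. Conditioning on that ordering is
# approximated here by rejection sampling.
gaps = []
for _ in range(TRIALS):
    a, b = noisy(TRUE_A), noisy(TRUE_B)
    while a > b:                      # enforce the remembered ordering
        a, b = noisy(TRUE_A), noisy(TRUE_B)
    gaps.append(b - a)

print(f"details-first reversal rate: {reversals / TRIALS:.2f}")
print(f"ordering-first mean reported gap: {statistics.mean(gaps):.1f} degrees")
```

In this toy model, conditioning on the remembered ordering not only eliminates order reversals but also pushes the two estimates apart, so the mean reported gap exceeds the true 3 degrees, illustrating how a high-level prior can reshape the recalled details.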
Moving forward, the researchers plan to apply their work to studies of long-term memory, not just simple perception. This could have major implications in many areas, from assessing the credibility of witnesses in court, to treating people with sensory processing issues, to assessing the credibility of presidential candidates. It could even help computer scientists working toward microchips that rival the power of the human brain, as such chips begin to possess similar perceptual acuity.
Moments ago, the European Southern Observatory (ESO) announced that it has made a revolutionary discovery, one that it will unveil to the world on Monday (October 16th). According to the media advisory released today by the ESO, scientists have observed an astronomical phenomenon that has never been witnessed before.
Beyond that, no information is available regarding this most recent announcement.
The last time that astronomers unveiled a groundbreaking discovery of this nature was when scientists working at LIGO (the Laser Interferometer Gravitational-Wave Observatory) detected gravitational waves. Ultimately, the find ushered us into a new era in astronomy, allowing us to see the universe as never before.
To clarify, before this detection, we were only able to perceive the cosmos through observations of electromagnetic radiation—through gamma rays, x-rays, visible light, and so on. Thanks to the LIGO discovery, we can now observe the very ripples of spacetime itself.
Of course, there are a number of mysteries that scientists have yet to explain in relation to the origins and evolution of the cosmos. As such, it is difficult to pin down the specific nature of this observation — perhaps scientists finally observed dark energy, the mysterious force thought to make up approximately 73 percent of the universe, or perhaps it is a discovery that scientists never before fathomed. Stay tuned.
A team of researchers from Boston University (BU) has explored the possibility of enhancing a person’s ability to learn and control their behavior — in short, to change how people think — by stimulating the brain. BU researcher Robert Reinhart used a new form of brain stimulation, called high-definition transcranial alternating current stimulation (HD-tACS), to “turbo charge” two brain regions that influence how we learn.
“If you make an error, this brain area fires. If I tell you that you make an error, it also fires. If something surprises you, it fires,” Reinhart said in a BU Research press release, referring to the medial frontal cortex, which he calls the “alarm bell of the brain.”
Reinhart and his colleagues found that stimulating this region, as well as the lateral prefrontal cortex, could change how a person learns. “These are maybe the two most fundamental brain areas involved with executive function and self-control,” he added.
In a study published in the journal Proceedings of the National Academy of Sciences (PNAS), Reinhart’s team described how applying electrical stimulation using HD-tACS quickly and reversibly increased or decreased a healthy person’s executive function, which led to a change in behavior.
Reinhart’s team tested 30 healthy people, each wearing a soft cap with electrodes that conveyed the stimulation. The test was simple: each subject had to press a button every 1.7 seconds. In the first three rounds of tests, the researchers either cranked up the synchronicity between the two brain regions, disrupted it, or did nothing.
The participants’ brain activity, monitored with an electroencephalogram (EEG), showed statistically significant results. When synchronicity between the regions was enhanced, the subjects learned faster and made fewer mistakes, which they corrected quickly. When it was disrupted, they made more errors and learned more slowly.
What was even more surprising was when 30 new participants took an adjusted version of the test. This group started with their brain activity temporarily disrupted, but then received stimulation in the middle of the activity. The participants quickly recovered their original brain synchronicity levels and learning behavior. “We were shocked by the results and how quickly the effects of the stimulation could be reversed,” said Reinhart.
Although their study still leaves much to learn, the BU team was actually the first to identify and test how the millions of cells in the medial frontal cortex and the lateral prefrontal cortex communicate with each other through low frequency brain waves. “The science is much stronger, much more precise than what’s been done earlier,” said David Somers, a BU brain sciences and psychology professor who wasn’t part of the study.
The bigger question, Somers noted, is how far a person can go with such a technology. Who doesn’t want to have their brain performance enhanced? This could produce the same effects as nootropics or smart drugs, but with fewer potential side effects, as the brain is stimulated directly. Having access to such a technology could be a game changer — but just as with smart drugs, there’s the question of who should have access to such a technology.
Quantum dots, currently used in place of organic dyes in various photoelectronics experiments to trace the ways that drugs and other molecules move through the body, may have a new supporting role in healthcare. Scientists have engineered quantum dot nanoparticles that produce chemicals capable of making bacteria more vulnerable to antibiotics. This will hopefully be a step forward in the fight against drug-resistant pathogens, such as superbugs, and the infections they cause.
In this study, antibiotics empowered by the experimental quantum dots were 1,000 times more effective at fighting off bacteria than antibiotics alone. The quantum dots used were about the width of a strand of DNA, 3 nanometers in diameter, and were made of cadmium telluride, a stable crystalline compound often used in photovoltaics. When exposed to green light at a particular frequency, the dots’ electrons bond with oxygen molecules in the body to form superoxide. Bacteria that absorb the substance are unable to fend off antibiotics, as their internal chemistry falls out of balance.
The team mixed different quantities of quantum dots into varying concentrations of each of five antibiotics to create a range of samples for testing. They then added these samples to five strains of drug-resistant bacteria, including methicillin-resistant Staphylococcus aureus (MRSA) and Salmonella. Across the 480 tests of different quantum dot, antibiotic, and bacteria combinations, more than 75 percent of the samples containing quantum dots were able to curb bacterial growth or kill the bacteria entirely with lower doses of antibiotics.
According to the World Health Organization (WHO), antibiotic resistance is among the most serious threats to food security, health, and development in the world. It can affect anyone in any country, and infections like gonorrhea, pneumonia, and tuberculosis that were once simple to treat are now becoming increasingly difficult to manage due to antibiotic resistance. Aside from obvious health and even mortality risks, antibiotic resistance also causes higher medical costs and longer hospital stays. And, although some level of antibiotic resistance occurs naturally as bacteria adapt to survive, misuse of antibiotics in both animals and humans is drastically accelerating this process.
In the US alone, at least 2 million people are affected by antibiotic resistance every year. And, if nothing is done to combat this problem, antibiotic resistance could kill more than 10 million people a year by 2050. Researchers are working on a variety of techniques to overcome this challenge. Some are using CRISPR to attack the bacteria directly, while others are looking for answers by studying the ways that ants battle detrimental fungi. Meanwhile, some scientists are searching for the genetic origins of antibiotic resistance, and others are even battling resistance by studying the ways bacteria behave in space.
One limitation of this study’s use of quantum dots is the light that activates the process; it has to come from somewhere, and it can only radiate through a few millimeters of flesh. For now, these quantum dots would really only be useful for surface issues. However, the team is also working to develop nanoparticles that absorb infrared light instead, as infrared light can pass through the body and could be used to treat bone and deep tissue infections.
New research shows how, with the addition of a bacterial gene, corn’s nutritional value can be efficiently enhanced. The gene enables corn, the largest commodity crop in the world, to produce methionine, a key amino acid essential for tissue repair and growth. By producing a staple crop that contains methionine, which is found in meat, millions of people all over the world who can’t afford to eat meat could improve their health through nutrition. This genetically engineered corn crop could also dramatically reduce worldwide animal feed costs.
Methionine is one of nine amino acids that are essential to human health. In addition to supporting tissue repair and growth, it strengthens nails and improves the skin’s flexibility and tone. Methionine also contains sulfur, which aids cells in absorbing zinc and selenium and guards against both pollution and premature aging. Amino acids occur in our food, so nutritionally inadequate diets often lack sufficient amounts of one or more of these critical compounds.
Animals, including livestock, also need methionine. Because corn naturally lacks the amino acid, billions of dollars’ worth of methionine must be added to corn-based animal feed annually. For example, according to the study, chicken feed is typically made up of corn and soybeans, so it usually lacks methionine, the essential sulfur-containing amino acid.
“It is a costly, energy-consuming process,” Waksman Institute of Microbiology director and study senior author Joachim Messing said in a press release. “Methionine is added because animals won’t grow without it. In many developing countries where corn is a staple, methionine is also important for people, especially children. It’s vital nutrition, like a vitamin.”
Adding Methionine to Corn
In this study, researchers inserted a gene from the E. coli bacterium into the genome of the corn plant and then produced several generations of the modified corn. The E. coli enzyme — 3′-phosphoadenosine-5′-phosphosulfate reductase (EcPAPR) — spurred methionine production in the leaves of the plant rather than throughout the plant, an intentional choice aimed at avoiding an accumulation of toxic byproducts. This was enough to prompt a 57 percent increase in methionine in the corn kernels, and observations of chickens that ate the corn as part of a feeding trial showed that the modified plant was nutritious.
“To our surprise, one important outcome was that corn plant growth was not affected,” Rutgers University-New Brunswick Department of Plant Biology professor and study co-author Thomas Leustek said in the press release. This will be a tremendous boon to subsistence farmers in the developing world, Leustek pointed out: “Our study shows that they wouldn’t have to purchase methionine supplements or expensive foods that have higher methionine.”
This is another example of how genetically modified foods can actually be helpful from a public health perspective, despite the generally negative reputation they endure. Scientists are modifying crops to help farmers grow more food efficiently and with a decreased environmental impact. The important thing to note from this research is that we should remain vigilant about the long-term effects of our actions, but not let blanket fears stand in the way of progress.
South Korea is in the midst of developing a bomb that can target electrical grids to essentially render an area’s electronics utterly useless.
Known as “blackout bombs,” these devices are designed to be dropped by a plane onto key power stations. Upon impact, they release carbon graphite filaments capable of short-circuiting and disrupting any nearby hardware that relies on electricity.
Blackout bombs were first used by the U.S. Navy in 1991 during the Gulf War and then by NATO against Serbia in 1999. Now, Seoul’s Agency for Defense Development (ADD) has plans to use these bombs against North Korea’s own power systems.
The bombs would be used as part of South Korea’s “Kill Chain” plan, in which the nation would preemptively launch missiles at North Korea if it detected the country was preparing to fire its own.
“All technologies for the development of a graphite bomb led by the ADD have been secured. It is in the stage where we can build the bombs anytime,” a military official told Korean news agency Yonhap.
Preparing for Battle
This news of South Korea’s development of blackout bombs arrives at a time when no one knows for sure whether or not North Korea will act upon its threats of nuclear war. That said, the South Korean military has stated it doesn’t believe an attack is imminent.
“We have yet to detect any signs of immediate provocations from North Korea,” a South Korean military source told Yonhap. “We are maintaining an upgraded monitoring effort to guard against any developments.”
South Korea isn’t the only nation planning for the worst from North Korea. Britain is also revising its own battle plans for war with the dictatorship. Meanwhile, U.S. Defense Secretary James Mattis said during his keynote address at the Association of the U.S. Army’s annual meeting today that the U.S. has “got to be ready” in case it must respond quickly to North Korean activity.
If North Korea were to launch a nuclear attack, the devastation could be catastrophic, so this is one instance when early preparation could truly be a matter of life or death.
While teams of researchers continue the hunt for clues into the nature of elusive dark matter and dark energy, which comprise most of the known universe, other teams are trying to unravel the mysteries of so-called ordinary matter, which contains the protons, neutrons, and electrons with which we’re familiar.
While we can and have observed ordinary matter under a variety of circumstances, we have yet to actually see about half of what we know must be residing in the vast gaps between galaxies, stars, and other objects in space. This missing matter even accounts for much of our own Milky Way.
This missing matter has long been predicted to take the form of tenuous strands of gas between galaxies, but that very tenuousness has made it impossible to detect using X-ray telescopes. To get around that, both teams made use of the Sunyaev-Zel’dovich effect. This phenomenon occurs when leftover light from the Big Bang passes through hot gas, leaving behind a trace of the gas that can be captured. A map of this effect across the observable universe was produced by the Planck satellite in 2015.
Using data from the Sloan Digital Sky Survey, each team chose pairs of galaxies supposedly connected by baryon strands. Then, to make the faint individual signals more visible, they stacked the Planck signals for these areas. The team at IAS combined 260,000 pairs of galaxies, while the Edinburgh team worked with more than a million pairs.
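The reason stacking works can be sketched with a toy simulation. This hedged Python example is not the teams' actual pipeline and uses invented numbers; it simply shows how averaging many noisy measurements of the same faint signal suppresses the noise by roughly the square root of the sample count:

```python
import random
import statistics

# Illustrative sketch of signal stacking (not the teams' actual analysis):
# averaging N noisy measurements of the same faint signal shrinks the noise
# by roughly sqrt(N), which is why combining hundreds of thousands of galaxy
# pairs can reveal a signal that is invisible in any single pair.

random.seed(0)
TRUE_SIGNAL = 0.05   # hypothetical faint signal, far below the noise level
NOISE_SIGMA = 1.0

def stacked_estimate(n_pairs):
    """Average n_pairs noisy measurements of the same underlying signal."""
    samples = [TRUE_SIGNAL + random.gauss(0.0, NOISE_SIGMA)
               for _ in range(n_pairs)]
    return statistics.fmean(samples)

print(stacked_estimate(1))          # a single measurement: swamped by noise
print(stacked_estimate(1_000_000))  # a million stacked: lands close to 0.05
```

With a million stacked measurements the statistical error drops by a factor of about a thousand, which is the same principle that let the teams pull a faint Sunyaev-Zel'dovich signature out of the Planck map.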
Their findings were similar. The IAS group found the gas filaments to be about three times denser than the mean density of ordinary matter in the universe, while the Edinburgh group found them to be six times denser. In short, both found definitive evidence that the gas between the galaxies was dense enough to form filaments.
This discovery is a remarkable example of how new technologies can allow scientists to test theories made decades prior.
As Harvard-Smithsonian Center for Astrophysics in Massachusetts professor Ralph Kraft, who wasn’t involved in the studies, told New Scientist, “Everybody sort of knows that it has to be there, but this is the first time that somebody – two different groups, no less – has come up with a definitive detection.”
He added, “This goes a long way toward showing that many of our ideas of how galaxies form and how structures form over the history of the universe are pretty much correct.”
Beyond giving scientists the ability to throw around an “I told you so” or two, the discovery of this missing matter also has the potential to improve our current maps of the universe and inform future research. The more we learn about this world in which we live, the more we increase our ability to maneuver within it, perhaps one day reaching the very same distant galaxies studied by these groups of researchers.
Scientists have set a new record in optical measurements using photons, achieving a level of precision that before now was only possible in the world of theory.
In a first-of-its-kind experiment, a team of physicists has broken through what’s called the shot-noise limit – maximizing the amount of information that can be extracted from individual particles of light in optical measurements.
For decades, theoretical physicists have predicted that taking measurements with photons in quantum states – where individual particles of light are entangled – could deliver an advantage over measurements taken with light in non-quantum states.
“When photons are entangled, their properties are correlated, or connected together,” quantum physicist Geoff Pryde from Griffith University in Australia told ScienceAlert.
“This means there is less randomness in the measurement. However, it turns out that these entangled states only work if the entangled photons are high-quality and don’t go ‘missing’.”
That’s because randomness creeps into optical measurements when individual particles of light are unintentionally absorbed or scattered in the measurement device, or are simply not detected.
These losses have previously prevented scientists from unconditionally beating the shot-noise limit, the precision ceiling for measurements made with light in non-quantum states. Thanks to new research by Pryde and his team, that’s no longer the case.
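The two limits in play here have a compact standard form. For a phase measurement made with N independent, unentangled photons, the achievable uncertainty scales as the shot-noise limit, while entangled probes can in principle approach the tighter Heisenberg limit (this is a textbook result in quantum metrology, not a formula from this particular study):

```latex
\Delta\phi_{\mathrm{SNL}} \propto \frac{1}{\sqrt{N}}
\qquad \text{vs.} \qquad
\Delta\phi_{\mathrm{HL}} \propto \frac{1}{N}
```

The entanglement advantage only survives if essentially every photon is generated and detected, which is why photon loss has been the sticking point.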
“What is new here is that we are able to make and measure high-quality photons with high efficiency (they don’t go missing),” Pryde explains, “and so we can show that the technique really works as described in the theory.”
Measure by Measure
To achieve this, the team passed a laser through a nonlinear crystal, with the properties of each carefully tailored to produce high-quality entangled photons.
The photons were then passed through a sample (the object being measured), which in this study was a quartz crystal.
By doing so, and measuring the photons with their high-efficiency detectors, the team was able to show that the shot-noise limit can be unconditionally broken through, meaning random noise can be avoided in extremely precise optical measurements.
“It shows that photons in certain quantum states really can be used to make certain kinds of measurement better than when we don’t use quantum physics,” explains Pryde.
“We hope that future extensions (entangling many more than two photons together) can be used for precision measurement of sensitive samples.”
As for what those samples could be, the team acknowledges it’s a little early to speculate. But it’s possible that in the future, this kind of method could enable scientists to measure material with very little light at all.
That could be used to measure biological samples with a limited number of photons, so as not to damage delicate specimens.
But for now, the biggest accomplishment is showing that photons can actually be used to measure in this way without being lost – delivering upon the theoretical promise of decades ago.
Nature is, perhaps, the most efficient builder there is. Since life began, examples have abounded of inorganic components working with organic material to form composite materials. Now, scientists from Duke University have effectively harnessed nature’s construction abilities to develop 3D materials. In a study published in the journal Nature Biotechnology, the researchers show it’s possible to program bacteria to build a device that functions as a pressure sensor.
Growing materials using cellular or bacterial processes isn’t new, but the way the Duke researchers harnessed this incredible ability is quite novel. Previous attempts were limited to 2D structures and depended heavily on external control to make the bacteria grow. The new research, however, showed that it’s entirely possible to let nature do its thing.
“Nature is a master of fabricating structured materials consisting of living and non-living components,” researcher Lingchong You, a Paul Ruffin Scarborough Associate Professor of Engineering at Duke, said in a press release. “But it is extraordinarily difficult to program nature to create self-organized patterns. This work, however, is a proof-of-principle that it is not impossible.”
Basically, You’s team programmed a genetic circuit (a biological package of instructions) into the bacteria’s DNA. This produced a protein that promoted its own expression in a positive feedback loop, causing the bacteria to grow into a dome-shaped colony until they ran out of food. The bacteria also released small molecules that worked as messengers, capable of diffusing into the environment. Once the bacterial colony reached its critical threshold, it began producing two more proteins — one stopped growth, while the other worked as a biological Velcro that could latch onto inorganic materials.
You’s team managed to turn their hybrid structure into a pressure sensor. They let the bacteria’s biological Velcro proteins latch onto gold nanoparticles, which formed a shell as big as the average freckle. They then connected LED lights via copper wiring to two identical dome structures, which were placed opposite each other and sandwiched between separate membranes. When pressed, the deformation increased the conductivity of the domes and lit the LEDs.
“In this experiment we’re primarily focused on the pressure sensors, but the number of directions this could be taken in is vast,” first author Will (Yangxiaolu) Cao explained. “We could use biologically responsive materials to create living circuits. Or if we could keep the bacteria alive, you could imagine making materials that could heal themselves and respond to environmental changes.” A number of other studies have shown that it’s quite possible to program cellular DNA; the most popular of these have enabled the development of DNA computers and storage devices that use genetic material. You’s team, however, showed that it’s possible to develop 3D materials using an entirely natural process. At the very least, this could become a more efficient and cost-effective fabrication method. The size and shape of the bacterial dome can also be controlled by altering the properties of the porous membrane where the bacteria are grown.
“We’re demonstrating one way of fabricating a 3-D structure based entirely on the principle of self-organization,” researcher Stefan Zauscher said in the press release. “That 3-D structure is then used as a scaffold to generate a device with well-defined physical properties. This approach is inspired by nature, and because nature doesn’t do this on its own, we’ve manipulated nature to do it for us.”
We, as humans, have a variety of senses that we use to experience the world around us. But perception can be a tricky thing. It can be fooled by, say, a two-dimensional pattern of light creating the appearance of a 3D object or form. This is how real-world holograms are achieved, and some suggest this illusion could be extended far beyond the confines of a stage or exhibit. In fact, some scientists believe that, at least theoretically, it’s possible that our entire universe is a hologram.
The first inklings of this strange idea emerged in the 1970s, when scientists debated the properties of black holes and how an object’s entropy could seemingly be lost without violating the second law of thermodynamics. As astrophysicist Paul Matt Sutter explained in an interview with Fraser Cain, “when it comes to black holes, the information content of a black hole is proportional not to its volume, but to its surface area.”
In other words, the information going into a black hole might be directed onto and maintained at its event horizon, which is 2D — essentially 3D information stored in 2D space. This line of thinking expanded in the 1990s, when physicists discovered that if you assume the universe is a hologram, a theory with gravity in a given number of dimensions can be described by a theory without gravity in one dimension fewer.
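The “information proportional to surface area” claim is quantified by the Bekenstein-Hawking entropy formula, a standard result (not specific to the interview quoted here) in which a black hole’s entropy S grows with its event-horizon area A rather than its volume:

```latex
S_{\mathrm{BH}} = \frac{k_B\, c^{3}\, A}{4\, G\, \hbar}
```

Here k_B is Boltzmann’s constant, c the speed of light, G the gravitational constant, and ħ the reduced Planck constant; doubling the horizon area doubles the maximum information content, no matter how the enclosed volume changes.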
These phenomena are described by the “holographic principle,” which states, simply, that all objects can be explained by the information stored on their surfaces — so the space around you might be 3D, but if you rest on a 2D surface, then perhaps a shadow or reflection of you exists within that 2D space. In other words, a higher dimension of information is, essentially, encoded in a lower dimension of space.
However, the most intensively explored instance of the holographic principle works only under very special circumstances — when 5D space-time is bent back on itself, as New Scientist put it, “rather like the surface of a Pringle.” This “trick” of bending space-time, eliminating gravity, and working under the constraint of the universe being a hologram, has had a variety of practical applications. The holographic principle, a part of string theory, has helped with the advancement of supercomputers and physics problems like explaining why particles have mass.
Raphael Bousso, who has been working on the principle at Stanford University, has confidently stated that “The world doesn’t appear to us like a hologram, but in terms of the information needed to describe it, it is one.”
While it’s likely not literally true that the universe is a hologram, the potential for 3D information to be stored in two dimensions has been shown with black holes. This might cause you to yell, “A-ha! Nothing is real!” at your computer screen, but, as Sutter concretely puts it, the holographic principle is just “a convenient mathematical tool because some questions in physics are super hard.”
Mars famously has the largest volcanoes known to science. The largest is Olympus Mons, which towers 22km above the surrounding plains — over two and a half times taller than Mount Everest. This extinct volcano is 640km wide even at its narrowest point, greater than the distance between London and Glasgow, or Los Angeles and San Francisco. And Olympus Mons isn’t alone in the Earth-beating stakes — three other Martian volcanoes are more than 10km high.
Mars is a small world. It is half the diameter and less than 11% the mass of Earth, so the existence of such volcanoes was particularly surprising when they were revealed by the first satellite orbiter pictures collected by NASA in the 1970s. Ever since, scientists have been keen to discover more about these towering mountains — what they are made of, when they first erupted, when they were last active, and why they grew so much larger than anything on our own planet. So how are we getting on?
Spacecraft have sent back stunning images and data about these volcanoes over the years, yielding an amazing amount of knowledge. We have learned a lot from the impact craters made by asteroids, for example, since older areas on the planet have more craters than younger areas.
From this, scientists have concluded that the volcanoes on Mars began erupting well over 3.5 billion years ago, roughly comparable to how far back eruptions go on Earth. The most recent Martian eruptions are perhaps a few tens of millions of years old. No active volcanoes have been discovered, at least not yet.
Scientists also study Martian volcanoes by examining certain meteorites on Earth. Asteroid strikes on Mars are relevant to this as well, since massive amounts of energy are released when big asteroids hit the surface. This is often sufficient to blast other pieces of rock upwards, some of which reach Earth as meteorites.
We have now recovered over 100 samples of genuine Martian space rock: the gases trapped inside them match the Martian atmosphere as recorded by the Viking and Curiosity missions. The meteorites can be examined in laboratories with state-of-the-art machines that are too large and heavy to fit on spacecraft. My colleagues and I have just published the latest such research in Nature Communications. The first detailed analysis of the eruption rates of volcanoes on Mars using Martian meteorites, it involved the Scottish Universities Environmental Research Centre, the University of Glasgow, Lawrence Livermore National Laboratory in California, and the Natural History Museum in London.
To determine when the rocks originally erupted, we used a technique known as argon-argon geochronology. This works by using a mass spectrometer to measure the amount of argon built up from the natural decay of potassium. It showed that the meteorites formed between 1.3 billion and 1.4 billion years ago, from at least four eruptions over the course of 90 million years. This is a very long time for a volcano to be active, and much longer than terrestrial volcanoes, which are typically only active for a few million years.

We examined six meteorites which had been found in various places over the last century, including the Egyptian desert, Indiana in the American Midwest, and the barren ice fields of Antarctica. They had been ejected into space together around 11 million years ago. This is important because it means they must have left Mars in the same asteroid impact, from the same volcano.
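The age equation behind argon-argon geochronology has a standard form. The sample is first irradiated so that a known fraction of its potassium converts to ³⁹Ar; the age then follows from the measured ratio of radiogenic ⁴⁰Ar to that ³⁹Ar, where λ is the total decay constant of ⁴⁰K and J is an irradiation parameter calibrated against a mineral standard of known age:

```latex
t = \frac{1}{\lambda}\,\ln\!\left(1 + J\,\frac{^{40}\mathrm{Ar}^{*}}{^{39}\mathrm{Ar}_{K}}\right)
```

Because both argon isotopes are measured in the same mass-spectrometer run, the method avoids having to measure potassium separately, which is what makes it practical for small meteorite samples.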
Yet this is only scratching the surface of the volcano, since the asteroid impact will only have excavated rocks buried a few tens of metres below the surface. When we are talking about a volcano that could be upwards of 10km tall, this only represents a very small portion of its history. It must therefore have started erupting before the 1.4 billion-year-old rocks that we have been studying were formed.
We were also able to calculate that this volcano grew exceptionally slowly — about 1,000 times more slowly than volcanoes on Earth. This again indicates that for the Martian volcanoes to have grown so large, Mars must have been far more volcanically active in the distant past. It all serves to support the previous findings I mentioned about Martian volcanoes dating back upwards of 3.5 billion years.
Knowns and unknowns
The other reason for the massive size of Martian volcanoes is that Mars lacks active plate tectonics. This has allowed molten rock to erupt through the same parts of the planet’s crust for very long periods. For terrestrial volcanoes, by contrast, plate tectonics moves them away from their magma sources and brings their eruptions to an end.
The last piece of the puzzle for our Martian meteorites was where they came from. By investigating NASA satellite photos we found a potential candidate: a crater large enough to have ejected meteorites into space, but young enough to be consistent with the 11 million year ejection age, and on volcanic terrain. As yet unnamed, the crater is 900km from the summit of the 12.6km-high Elysium Mons volcano, over 2,000km north of the present site of the NASA Curiosity rover.
Our research work has underlined the significant differences in volcanic activity between Earth and Mars, but numerous secrets about these Martian wonders remain. Scientists are still debating the mechanisms in the planet’s mantle that drive such volcanoes and keep supplying magma for eruptions in the same places for so long. The age of the most recent eruptions on Mars are also still subject to considerable uncertainty. And there’s much still to be uncovered about the links between the planet’s volcanoes and its atmosphere.
Some of these secrets will continue to be unravelled through studying Martian meteorites, satellite images and new rovers. To truly understand the largest volcanoes in the Solar System, however, we will probably have to collect pieces of our neighbouring planet through human or robotic missions and bring them back to Earth.
Experts have predicted for years that, at some point in the near future, we will make the first human discovery of alien life — and that life will be on Mars. At the same time, researchers have worked tirelessly to pinpoint the origins of life on Earth.
Recently, scientists have discovered evidence that could be key to understanding the beginnings of life — namely, indications suggesting ancient hydrothermal deposits on the “sea-floor” of Mars. This hidden site in the Eridania region of Mars might not hold definitive answers about Martians, but could provide clues to how life evolved on our planet.
An illustration of deposits in the Eridania basin. Image Credit: NASA
NASA’s Mars Reconnaissance Orbiter (MRO) observed these deposits in a basin in the planet’s southern hemisphere, where the orbiter’s Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) identified minerals in the deposits. Recently, an international report published in the journal Nature Communications analyzed the MRO’s observations and concluded that the deposits were most likely formed by hot water, heated by a part of the Martian crust that was volcanically active in the distant past.
No life (microbial or otherwise) has been identified in the samples, but as Paul Niles of NASA’s Johnson Space Center in Houston explained in a press release from the Jet Propulsion Laboratory, “Even if we never find evidence that there’s been life on Mars, this site can tell us about the type of environment where life may have begun on Earth. Volcanic activity combined with standing water provided conditions that were likely similar to conditions that existed on Earth at about the same time — when early life was evolving here.”
These sea-floor deposits are roughly 3.8 billion years old, and around that same time, hydrothermal conditions on the sea-floor of Earth were potentially gearing up for life to begin. Now, because Earth’s crust is still so active, there aren’t clear remnants of this origin period. But this ancient sea-floor on Mars seems to be an excellent candidate for partially simulating Earth’s origin conditions.
“This site gives us a compelling story for a deep, long-lived sea and a deep-sea hydrothermal environment,” Niles said in the press release. “It is evocative of the deep-sea hydrothermal environments on Earth, similar to environments where life might be found on other worlds — life that doesn’t need a nice atmosphere or temperate surface, but just rocks, heat, and water.”
The report analyzing the discovery states that “Ancient, deep-water hydrothermal deposits in Eridania basin represent a new category of astrobiological target on Mars,” and that the “Eridania seafloor deposits are not only of interest for Mars exploration, they represent a window into early Earth.”
Scientists at the University of Colorado Boulder’s JILA (formerly the Joint Institute for Laboratory Astrophysics) have developed an incredibly precise quantum atomic clock based on a new three-dimensional design. The project has set a new record for quality factor, a metric used to gauge the precision of measurements.
The clock packs atoms of strontium into a cube, achieving 1,000 times the density of prior one-dimensional clocks. The design marks the first time that scientists have been able to successfully utilize a so-called “quantum gas” for this purpose.
Previously, each atom in an atomic clock was treated as a separate particle, and so interactions between atoms could cause inaccuracies in the measurements taken. The “quantum many-body system” used in this project instead organizes atoms in a pattern, which forces them to avoid one another, no matter how many are introduced to the apparatus. A state of matter known as a degenerate Fermi gas — which refers to a gas composed of Fermi particles — allows for all of the atoms to be quantized.
“The most important potential of the 3-D quantum gas clock is the ability to scale up the atom numbers, which will lead to a huge gain in stability,” said physicist Jun Ye, of the National Institute of Standards and Technology (NIST), who worked on the project. “We are entering a really exciting time when we can quantum engineer a state of matter for a particular measurement purpose.”
During laboratory tests, the clock recorded errors of just 3.5 parts in 10 quintillion, making it the first atomic clock to achieve such accuracy.
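To put that figure in perspective: 3.5 parts in 10 quintillion is a fractional error of 3.5 x 10^-19, and a quick back-of-envelope calculation (the universe-age figure is the commonly quoted ~13.8 billion years, not from the study) shows how little such a clock would drift even over cosmological timescales:

```python
# Back-of-envelope: how far would a clock with fractional error
# 3.5e-19 drift over the age of the universe?
FRACTIONAL_ERROR = 3.5e-19           # 3.5 parts in 10 quintillion (10^19)
SECONDS_PER_YEAR = 365.25 * 24 * 3600
UNIVERSE_AGE_YEARS = 13.8e9          # ~13.8 billion years

elapsed_seconds = UNIVERSE_AGE_YEARS * SECONDS_PER_YEAR
drift_seconds = FRACTIONAL_ERROR * elapsed_seconds

print(f"Drift over the age of the universe: {drift_seconds:.3f} s")
```

By this estimate, the clock would accumulate only about 0.15 seconds of error over the entire age of the universe.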
Watch the Clock
“This new strontium clock using a quantum gas is an early and astounding success in the practical application of the ‘new quantum revolution,’ sometimes called ‘quantum 2.0’,” said Thomas O’Brian, the chief of the NIST’s quantum physics division and Ye’s supervisor. “This approach holds enormous promise for NIST and JILA to harness quantum correlations for a broad range of measurements and new technologies, far beyond timing.”
Atomic clocks have clear-cut applications for tasks like timekeeping and navigation. However, the same technology can also be used in other strands of research — like the ongoing effort to better understand dark matter.
It’s been suggested that monitoring minor inconsistencies in the ticking of an atomic clock might offer insight into the presence of pockets of dark matter. Previous research has shown that a network of atomic clocks, or even a single highly sensitive system, might register a change in the frequency of the clock’s vibrating atoms or laser light if Earth passed through a dark matter field. Given that this project offers much greater stability than its predecessors, it could contribute to new breakthroughs in solving this persistent cosmic mystery.
A collaboration of dentists and engineers from the University of Nevada, Las Vegas (UNLV) made a discovery that could potentially revolutionize how and where we harvest, preserve, and use stem cells. In a study published in Biomaterials and Biomechanics in Bioengineering, the team led by UNLV biomedical sciences professor Karl Kingsley and advanced education program in orthodontics director James Mah turned to root pulps in teeth, specifically wisdom teeth, as an abundant source of stem cells.
“More and more adults—approximately 5 million throughout the country—have their wisdom teeth, or third molars, removed,” Kingsley said in a piece appearing in the 2017 issue of UNLV’s Innovation magazine. “Extracting teeth is relatively common among patients undergoing orthodontic treatments. And the majority of those teeth are healthy, containing viable tooth root pulp that offers opportunities for reproducing cells that have been damaged or destroyed by injuries or disease.”
The discovery was the result of rather tedious work. First, the team had to figure out an effective way to harvest stem cells from the tooth pulps. Conventional methods of extraction (drilling, shattering, etc.) often destroy a tooth and its pulp, making it difficult to collect enough stem cell samples. So, they developed an instrument they jestingly called the Tooth Cracker 5000, which cuts a tooth in half. Tests showed a 100 percent success rate in fracturing teeth.
Better Than Embryos?
This new method is able to recover about 80 percent of viable stem cells from fractured teeth. “Saying the test results were promising is a gross understatement,” Mah said. “We realized we’d invented an extraction process that produced four times the recovery success rate for viable stem cells. The potential application is enormous.”
Indeed, stem cells from teeth have the potential to work even better than their embryonic cousins. Stem cells work wonders because they can develop into other types of cells in their source organism. Embryonic stem cells are pluripotent, which means they can grow into any cell type. Tooth root pulp, on the other hand, is home to both pluripotent and multipotent stem cells—those that turn only into specific cell types. Once the method is perfected, it could make embryonic stem cells, and the ethical concerns that come with their use, a non-issue.
The UNLV team is currently looking at methods to better preserve collected stem cells. They’ve already identified some of the potential uses of tooth pulp stem cells, though: “We already know some populations of dental pulp stem cells can be converted into neurons, which could become therapies for cognitive diseases such as Alzheimer’s or Parkinson’s,” said Kingsley.
Indeed, stem cells could revolutionize medical treatment options. “There are potential applications of stem cells for multiple diseases, including cancer, arthritis, and lung disease,” he added. The UNLV team found, however, that pluripotent stem cells from dental pulp decrease dramatically once an adult reaches 30. “The next challenge is reliably collecting the stem cells early enough and storing them successfully so they can be used when needed,” Kingsley added.
Jacques Dubochet, Joachim Frank, and Richard Henderson have been named as the recipients of the 2017 Nobel Prize in Chemistry, honoring their work on cryo-electron microscopy.
The technology has allowed researchers to create biochemical maps, which fill in gaps left by previous methods. Biochemicals can be frozen mid-movement, which allows previously unseen processes to be observed, offering up new insights into the chemistry of living organisms and aiding the development of new pharmaceuticals.
Scientists once assumed that electron microscopes couldn’t be used to inspect living cells, as their electron beams are capable of destroying biological material. However, in 1990, Henderson successfully produced a 3D image of a protein at atomic resolution.
Frank’s contribution to the technology was developed between 1975 and 1986, when he established a method of merging the fuzzy two-dimensional images produced by an electron microscope into one clear 3D model.
Meanwhile, Dubochet was responsible for developing a method of adding water to the microscope’s vacuum and cooling it rapidly, helping biomolecules to retain their original shape.
A Closer Look
Henderson, Frank, and Dubochet made breakthrough progress in the 1970s and 1980s, and they have been hard at work perfecting the process ever since. It was only in 2013 that cryo-electron microscopy attained its desired atomic resolution.
The technology has allowed scientists to produce never-before-seen imagery of all different types of biomolecules, from the Zika virus to proteins that cause antibiotic resistance.
“[Cryo-electron microscopy] has opened up essentially a kind of new, previously unapproachable area of structural biology,” said Henderson in an interview with Nobel Media. “But the difference is there are quite a lot of structures in biology that were resistant, were recalcitrant to the other methods, like x-ray crystallography or nuclear magnetic resonance spectroscopy. So it has opened up essentially a kind of new, previously unapproachable area of structural biology.”
The US Air Force has developed a new “Mini Crypto” chip that will allow hardware like unmanned aerial vehicles and explosive ordnance disposal robots to communicate and exchange data securely.
The chip is a self-contained encryption engine that is capable of creating a unique, session-based key that the receiving party needs to use to decrypt the message or data. The process meets the NSA’s stringent security standards, making it difficult and resource-intensive for other entities to access the information being relayed.
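The chip’s internals are classified, so the sketch below is only a toy illustration of the general pattern described here — a fresh key generated per session, which the receiving party must hold to decrypt — built from a simple SHA-256 keystream. It is nothing like the chip’s actual NSA-certified cipher, and all the names and data are made up:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream; decryption is the same operation."""
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

# One fresh random key per session; the receiver needs it to decrypt.
session_key = secrets.token_bytes(32)
ciphertext = encrypt(session_key, b"drone telemetry")
assert encrypt(session_key, ciphertext) == b"drone telemetry"
```

Because the key is unique to the session, capturing one transmission tells an adversary nothing about past or future sessions.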
The device was designed to be as small and lightweight as possible, to make it convenient in the field. It’s said to be around the size of a cracker, and requires just 400 milliwatts of power to operate — around the same amount as a conventional hearing aid.
It’s crucial for operatives in the field — whether they’re human or otherwise — to be able to communicate back to their commanding officers without the information being intercepted. One of the greatest strengths of the Mini Crypto chip is that it’s “losable,” meaning that it can be deployed into the field without any concerns about retrieving the physical hardware to protect data.
“Mini Crypto’s portability and losability make it ideal for things like precision air drops of supplies,” said Heidi Beason, the program manager for the project at the cryptologic and cyber system division of the Air Force’s joint base in San Antonio, Texas, in a press release. “We think this will allow commanders to send combat material to really austere locations for pickup, and protect their exact location, without putting the troops who need the material at risk.”
Fortunately, when the great geomagnetic storm of 1859 (the Carrington Event) struck, the world didn’t yet rely on electricity or electronic communication systems. But what would happen if a storm like this hit Earth today?
Coronal mass ejections are made up of slow-moving magnetized plasma that creates powerful electrical currents in Earth’s atmosphere. According to experts interviewed by Gizmodo, the electrical ground currents picked up from the atmosphere during a large CME would melt the copper inside electrical transformers, causing massive power outages. That means no internet, refrigeration, heating, air conditioning, or gas pumps, and in many cities, no toilets or sewage systems.
Hospitals would be left to rely on backup generators, which, as we have seen, don’t last long in times of crisis. GPS systems would be knocked out, meaning many airplanes that rely on GPS would have to rapidly switch to manual navigation and landing. Astronauts aboard the International Space Station (ISS) would lose communication with Earth as they were bathed in dangerous radiation.
The potential economic and social impact of such a storm could be catastrophic. According to a 2009 National Academies of Sciences report, if we were hit by one today, the cost would be more than $2 trillion, and other reports have estimated figures as high as $10 trillion.
Manasvi Lingam and Avi Loeb of the Harvard-Smithsonian Center for Astrophysics believe future losses could total $20 trillion as we become more dependent on technology. However, they have an idea for how we could prepare for this catastrophe before it happens, ultimately saving trillions of dollars.
While a physical shield would block sunlight (a bad thing for, well, all life on Earth) and an electric shield would mess with the balance of positive and negative particles reaching Earth, the authors believe a magnetic field is a viable option.
“The related engineering project could take a few decades to construct in space,” Loeb told Gizmodo. “The cost for lifting the needed infrastructure to space (weighing 100,000 tons) will likely [cost around] hundreds of billions of dollars, much less than the expected [solar storm] damage over a century.”
Anders Sandberg, a research fellow at Oxford University’s Future of Humanity Institute, told Gizmodo that he thought Lingam and Loeb’s shield would likely cost much more than estimated and that their model makes too many assumptions about the planet’s changing economies. However, he did think it was feasible on the engineering side — a one-centimeter wide, Earth-sized loop of copper wire powered by a solar farm would be enough to accomplish the task.
For a civilization that relies on technology to function, the benefit of such a shield could very well outweigh the costs. As Daniel Baker, Professor of Astrophysical and Planetary Science at the University of Colorado, told NASA, if the 2012 storm that narrowly missed Earth had hit us, we’d still be trying to recover years later.
Since Darwin’s time, scientists have generally agreed that life on Earth started from “warm little ponds” — or the so-called primordial soup that was rich in organic compounds. Life began in its most primitive forms in these waters some 3.8 billion years ago, but how it actually happened has been the subject of endless study and debate. Most recently, a team of researchers from McMaster University and the Max Planck Institute in Germany is contributing to the discussion.
In a study published in the Proceedings of the National Academy of Sciences (PNAS), researchers led by McMaster scientists Ben K.D. Pearce and Ralph Pudritz, together with Dmitry Semenov and Thomas Henning of the Max Planck Institute for Astronomy, suggested that to understand how life began, one needs to look at Earth as it was billions of years ago. Life started at a time when the Earth’s surface was continuously changing, transforming, and being pelted by meteorites — some of which landed in those warm little ponds.
These space rocks carried with them essential elements that helped to jump-start life, including the nucleotide components that formed RNA polymers. “As our study shows, astronomy provides a vital part of the answer. The details of how our solar system formed have direct consequences for the origin of life on Earth,” Henning said in a press release. A combination of wet and dry conditions allowed the polymers to bond and, eventually, RNA to replicate itself — non-living components creating life, as another study put it.
A Tough Start
Clearly, life had to emerge during a difficult period in this planet’s geological history, when the atmosphere was not yet fully formed, as explored in another study. “That’s the Holy Grail of experimental origins-of-life chemistry,” Pearce added. What was crucial was the formation of RNAs that learned to self-replicate, which eventually gave rise to the development of DNA. “DNA is too complex to have been the first aspect of life to emerge,” Pudritz said. “It had to start with something else, and that is RNA.”
All of these recent findings adding to theories of the origins of life lead to the same outcome, as Pudritz pointed out. “Because there are so many inputs from so many different fields, it’s kind of amazing that it all hangs together,” he said in the press release. “Each step led very naturally to the next. To have them all lead to a clear picture in the end is saying there’s something right about this.” Their study clarified, however, that these primordial ponds were more likely to contain the ingredients that triggered life than hydrothermal vents, as another popular theory suggests.
If meteorites landed in ponds on Earth, it’s highly possible that they also peppered other parts of the early Solar System as meteorites were more common during that time. This suggests the possibility that what triggered life on Earth could’ve also happened on other planets, a theory that Pearce and Pudritz plan to explore next year. The origins of life on Earth could help us to find traces of life elsewhere.
Glow-in-the-dark materials are nothing new. They usually work by absorbing ambient light and then slowly releasing the energy from it, a process called persistent luminescence. However, developing phosphorescent materials using inorganic compounds is fairly expensive as it relies on rare metals like europium and dysprosium. But a team of researchers from the Center for Organic Photonics and Electronics Research (OPERA) lab at Kyushu University in Japan wants to change things, and they’ve designed a glow-in-the-dark paint that is capable of light emission for more than one hour and can be applied to organic materials.
The research is published today in the journal Nature. The team opted for organic carbon materials, like those used in plastics and pigments, because, unlike their inorganic counterparts, they are less costly and don’t require high temperatures to manufacture. Others have tried using such organic materials before, but the longest light emission ever achieved lasted only a few minutes.
“Many organic materials can use energy absorbed from light to emit light of a different color, but this emission is generally fast because the excitation energy is stored directly on the molecule that produces the emission,” Ryota Kabe, lead author of the study, said in a press release. His team melted two complementary molecules — one that donates electrons and one that accepts them — together into a film. Applied to organic materials, the film was able to emit light for over an hour.
A Longer, Better Glow
Electron acceptors absorb light and use the excess excitation energy to remove an electron from so-called donors (electron-donating molecules). This give-and-take is repeated, resulting in a build-up of charges similar to those at work in organic solar cells; when the charges recombine, they release light, much as in OLEDs. The charges can persist in this new mixture in a separated state for a prolonged period of time.
“Our mixtures store the energy in electrical charges separated over a longer distance. This additional step allows us to greatly slow down the release of the energy as light, thereby achieving the glow-in-the-dark effect,” Kabe said. Potential uses of this new technique include existing applications for phosphorescent materials, like street signage, as well as bio-imaging methods that rely on photonics — perhaps it will even lead to improved glow-in-the-dark stickers for your kids’ rooms.
“With organics, we have a great opportunity to reduce the cost of glow-in-the-dark materials, so the first place we expect to see an impact is large-area applications, such as glowing corridors or roadways for added safety,” Chihaya Adachi, the director of OPERA, said in the press release. “After that, we can start thinking about exploiting the versatility of organic materials to develop glow-in-the-dark fabrics and windows, or even bio-compatible probes for medical imaging.”
This study is promising, but more work needs to be done before it can be applied commercially. Since the new process is sensitive to oxygen and water exposure, the researchers are looking at ways to add protective barriers while also checking the possibility of designing new molecular structures that could further increase the duration of light emissions.
Based on a new study, the same equipment that was integral to the work of this year’s winners of the Nobel Prize for Physics — gravitational wave detectors — might be able to provide valuable insight into another enigmatic field of research: dark matter.
“The nature of dark matter is one of the greatest mysteries in physics,” the study’s co-author Emanuele Berti noted in a University of Mississippi news release. “It is remarkable that we can now do particle physics – investigate the ‘very small’ – by looking at gravitational-wave emission from black holes, the largest and simplest objects in the universe.”
Berti and an international team of researchers produced calculations that suggest that some kinds of dark matter could form clouds around black holes. The going theory is that these clouds emit gravitational waves that could be detected by certain advanced equipment. “Surprisingly, gravitational waves from sources that are too weak to be individually detectable can produce a strong stochastic background,” explained co-author Richard Brito.
Dark matter is estimated to be five times as abundant as ordinary matter, and yet, no one has been able to directly detect it. It has the potential to unlock all kinds of secrets about the universe, so the great amount of interest in this topic by scientists and astrophysicists isn’t surprising.
If Berti and the rest of his team are correct in their belief that gravitational wave detectors will allow us to finally “see” dark matter, the implications would be tremendous. As Brito noted, “This is a new, exciting frontier in astroparticle physics that could shed light on our understanding of the microscopic universe.”
Indeed, gravitational waves shake the world both literally and figuratively. The “ripples” in spacetime detected by LIGO were the result of a collision between two black holes some 1.3 billion years ago in a rather distant galaxy.
By the time they reach Earth, gravitational waves are already very weak, and detecting them requires the use of extremely specialized instruments called laser interferometers. Although the waves were first observed in September 2015, LIGO didn’t officially confirm the detection until February 2016, as the researchers wanted to be certain of their discovery.
Pushing the Boundaries of Physics
The detection of gravitational waves has changed astrophysics forever, not just because it confirms Einstein’s general theory of relativity, but also because it illustrates our ability to observe the universe in a way that we’ve never done before. Thanks to LIGO, we’re now able to “hear” the universe in a completely unique way. Gravitational waves give us a fresh appreciation and understanding of how the universe expanded and continues to expand.
While the three new Nobel laureates were the pioneers of this work, making invaluable contributions to the LIGO project, this discovery was the product of decades of work by teams of researchers, and Thorne doesn’t want to take all the credit.
“It should go to all the people who built the detector or to the members of the LIGO-Virgo Collaboration who pulled off the end game,” he told The New York Times.
Thorne also expressed his astonishment at how the research followed the path he predicted decades ago. “For me, an amazing thing is that this has worked out just as I expected when we were starting out back in the ’80s,” he noted. “It blows me away that it all came out as I expected.”
Now that the technology is in place, Thorne expects to detect more gravitational waves in the coming years. “An enormous amount of rich science is coming out of this,” he said, and in fact, just last month, LIGO and Virgo astronomers detected their fourth spacetime ripple. Additionally, they now have the ability to accurately pinpoint the source of gravitational waves, adding to the precision with which we can observe this remarkable, world-shaking phenomenon.
Technically speaking, Moore’s law began as a simple observation: in 1965, Intel co-founder Gordon Moore noticed that, in the emerging industry of computer electronics, the number of transistors per square inch on integrated circuits seemed to be consistently increasing (roughly doubling) each year. Based on that observation, he predicted that the processing power of computers would increase (and their cost decrease) exponentially about every two years. The doubling period was later revised to 18 months as the rate of growth changed, but the basic claim of Moore’s law had already become part of our cultural understanding of computer science, electronics, and the economics involved.
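Moore’s observation is ordinary exponential growth, so the projection is a one-line formula. A quick sketch (the starting figures here are made up, and the two-year doubling period is just the commonly quoted value):

```python
def projected_transistors(start_count: float, years_elapsed: float,
                          doubling_period_years: float = 2.0) -> float:
    """Project transistor count under Moore's-law-style doubling."""
    return start_count * 2 ** (years_elapsed / doubling_period_years)

# A hypothetical chip with 1,000 transistors: after a decade of
# two-year doublings, it would hold 2^5 = 32 times as many.
print(projected_transistors(1_000, 10))  # → 32000.0
```

Changing the doubling period from two years to 18 months (1.5 years) is what turns the same formula into the revised, faster version of the law.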
The definition has changed and been challenged over the last few decades, and now it seems that Moore’s law might finally be coming to an end. In new research reported this week in the journal Nature Nanotechnology, a team of researchers at the Massachusetts Institute of Technology (MIT) demonstrated strange magnetic behavior that could greatly improve data storage methods.
Currently, data is read and written one bit at a time — a feat accomplished by altering the placement of magnetic particles. Instead, this new method manipulates “skyrmions” — virtual particles made of small disturbances in the orientation of this magnetism — by using electric fields. These “particles” can store data far longer than traditional methods allow.
Geoffrey Beach, an associate professor of materials science and engineering at MIT, led the original study which first documented the existence of skyrmions in 2016. In this new study, he has demonstrated, for the first time, that it’s possible to create the virtual particles in specific locations (when previously documented, the particles’ location was entirely random). This most recent development is what will be key in creating improved data storage systems.
Image Credit: Massachusetts Institute of Technology
Current magnetic storage systems adhere to Moore’s law, but they are reaching their physical limits. If they were replaced by the new skyrmion-based method, the law could not only be outgrown — it could be eliminated entirely. What’s preventing that shift is that we still need a way to read the stored data. It’s possible to use x-ray spectroscopy to accomplish this, but the equipment required is expensive and impractical for use in computers.
Other methods for reading the stored data exist in theory, but actually creating an option that is cost-effective and practical for personal computers will be the primary engineering challenge to solve.
However the future of this discovery plays out, one thing is for sure: it challenges Moore’s law to the extent that the law may become a thing of the past. It held as a reliable observation for decades, but as computer science has innovated, it has struggled to remain relevant, and skyrmion-based data storage could render it obsolete altogether. The innovation is exciting nonetheless: it could change not only how we build our technologies, but also the limits we’ve always assumed on how powerful processing can become.
On September 29, a video call took place between Beijing, the capital of China, and Vienna, the capital of Austria. This wasn’t any ordinary call, however: it was the first live demonstration of a call powered and securely encrypted using quantum technology. It marks a huge breakthrough in the realm of quantum communications, and shows the potential impact the technology could have on how information is transmitted and secured.
The quantum video call is the result of a collaboration between researchers at the Chinese Academy of Sciences, the Austrian Academy of Sciences, and the University of Vienna. The call was encrypted by sending information embedded in particles of light (photons) generated by the Micius satellite. Micius was launched last year and successfully used quantum cryptography to send data to Earth back in August.
As explained by the Austrian Academy of Sciences, the photons are sent to ground stations located in China and Europe, as well as the Satellite Laser Ranging Station in Austria’s city of Graz. Using the orbital relay station, communications can bypass the limitations imposed by the curvature of the Earth and the signal loss in long fiber optic cables. More importantly, unlike traditional communication methods (which can be hacked by anyone with the right technical knowledge) the process for quantum communications is said to be unhackable; anyone who attempts to infiltrate the system will immediately be discovered.
“If somebody attempts to intercept the photons exchanged between the satellite and the ground station and to measure their polarization, the quantum state of the photons will be changed by this measurement attempt, immediately exposing the hackers,” explains Johannes Handsteiner from the Austrian Academy of Sciences.
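The effect Handsteiner describes is the heart of quantum key distribution schemes such as BB84. A toy simulation (pure random-choice bookkeeping, not real quantum mechanics) illustrates the statistics: an eavesdropper who guesses the wrong measurement basis randomizes the photon’s state, corrupting roughly a quarter of the bits that sender and receiver keep and thereby exposing the intrusion:

```python
import random

def bb84_error_rate(n_photons: int, eavesdrop: bool, seed: int = 42) -> float:
    """Toy BB84 statistics: fraction of basis-matched bits that arrive corrupted."""
    rng = random.Random(seed)
    errors, kept = 0, 0
    for _ in range(n_photons):
        bit = rng.randint(0, 1)            # the sender's key bit
        alice_basis = rng.randint(0, 1)    # the sender's polarization basis
        value = bit
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            if eve_basis != alice_basis:
                value = rng.randint(0, 1)  # wrong-basis measurement randomizes the state
        bob_basis = rng.randint(0, 1)
        if bob_basis == alice_basis:       # only basis-matched bits enter the key
            kept += 1
            if value != bit:
                errors += 1
    return errors / kept

print(bb84_error_rate(20_000, eavesdrop=False))  # 0.0: clean channel
print(bb84_error_rate(20_000, eavesdrop=True))   # ~0.25: eavesdropper exposed
```

The sender and receiver can therefore publicly compare a small sample of their shared bits: an error rate near zero means the channel is clean, while anything approaching 25 percent reveals an interceptor.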
The Most Secure…For Now
It should be noted that quantum communications are only currently unhackable, and that’s largely due to how new the technology is. Someone could eventually devise a way to intercept such communications, which would hopefully, in turn, spur the development of more secure methods. We have some time before that happens, but for now at least, quantum technology is our most secure way of sharing data.
Beijing and Vienna won’t be the only cities to experience quantum video calls, though. According to the Chinese Academy of Sciences, future calls are planned between China and Singapore, Italy, Germany and Russia. Through constant use, we may be able to learn of the technology’s limitations, and the minimum hardware required to make it possible. We said in August that China appears to be leading the charge on quantum technology, and it continues to be the case months later.
Three American scientists — Jeffrey C. Hall, Michael Rosbash, and Michael W. Young — have been awarded the Nobel prize in physiology or medicine thanks to their research into the biological clock of plants, animals, and humans.
In the 1970s, a pair of researchers questioned whether the gene that controls fruit flies’ circadian rhythms could be identified, naming this unknown gene “period.” In 1984, the three Nobel prize-winning researchers succeeded in doing just that, with Hall and Rosbash isolating the period gene at Brandeis University and Young making his breakthrough at the Rockefeller University in New York.
“It was very unpleasant competition in the early ’80s, although we settled down,” Young reflected in an interview with The Guardian. “I think it’s possible we just started to act more like grown-ups because we got older.”
The discoveries didn’t end with the isolation of the period gene, however.
Hall and Rosbash later discovered that the gene encodes a protein known as PER that builds up in the cell during the night before degrading over the course of the day, serving as a type of clock. Young followed up on the research by uncovering two other genes that play important roles in the process. The first, dubbed “timeless,” helps halt the activity of the period gene, while the “doubletime” gene controls how often the entire cycle takes place.
Rhythm of Life
As former Nobel prize winner Sir Paul Nurse told The Guardian, the activities of plants, animals, and humans alike are based on their relationship with the Sun. “Every living organism on this planet responds to the Sun. We on this planet are slaves to the Sun,” said Nurse. “The circadian clock is embedded in our mechanisms of working, our metabolism, it’s embedded everywhere. It’s a real core feature for understanding life.”
That’s no exaggeration. The circadian rhythm can affect behavior, hormone levels, sleep patterns, body temperature, and metabolism, and thanks to the work of Hall, Rosbash, and Young, other scientists are able to delve further into its influence on these core components of natural life.
Whether our circadian rhythm is being affected by genetic defects, environmental factors, or something else entirely, a better understanding of our internal clock can have a huge impact on our health, and we have Hall, Rosbash, and Young to thank for their invaluable contributions to this field.
Despite what their parents might have taught them, physicists aren’t always the best at sharing with others. Yet a public release of data from the Large Hadron Collider (LHC), back in 2014, has already yielded exciting results: for one, it’s allowed physicists to confirm that a fundamental equation correctly describes what happens in the real world.
In the new study, the newly confirmed equation describes the jets of particles produced when protons collide and split off into their fundamental parts, particles known as quarks and gluons. This exciting finding came from analyzing open data on 750,000 particle jets produced by collisions in the Compact Muon Solenoid (CMS) experiment — one of the LHC’s largest experiments.
“In our field of particle physics, there isn’t the tradition of making data public,” lead author Jesse Thaler, an associate professor of physics at MIT and a long-time advocate for open access in particle physics, said in a press release. “To actually get data publicly with no other restrictions—that’s unprecedented.”
The LHC’s immense data release, published online on the European Organization for Nuclear Research (CERN) Open Data Portal website, added up to roughly 29 terabytes of information from 300 million high-energy collisions within CMS. It was the first public release of its kind from a large collider.
Though one might assume the fundamental equations used by physicists would be unassailable, many mathematical functions used in physics technically remain theoretical; the math tells physicists that they’re correct, but the pattern these equations describe hasn’t yet been observed through experiments in the physical world.
One such equation was the evolution equation, also known as the splitting function, which since the 1970s has been used to describe the pattern of particles put out by proton collisions.
“This idea had not existed before,” Thaler said in the press release. “That you could distill the messiness of the jet into a pattern, and that pattern would match beautifully onto that equation—this is what we found when we applied this method to the CMS data.”
Thaler’s team used the CMS data to examine each particle collision one by one, looking at the most prominent jet from each and categorizing their emissions as particles cleaved from one another.
Colliders have been historically tight-fisted with their data, out of concerns that it could be misinterpreted; often, glitches in the detectors themselves can create the ghosts of new physics phenomena, some persistent enough to fool physicists themselves.
“I think it was believed that no one could come from the outside and do those corrections properly, and that some rogue analyst could claim existence of something that wasn’t really there,” Thaler said.
Yet the team hopes that their success might inspire other colliders to release some information of their own. As Thaler concluded: “Colliders are big endeavors. These are unique datasets, and we need to make sure there’s a mechanism to archive that information in order to potentially make discoveries down the line using old data, because our theoretical understanding changes over time. Public access is a stepping stone to making sure this data is available for future use.”
The first direct detection of gravitational waves on 14 September 2015 proved that massive objects can ripple the structure of space, verifying a key prediction of Albert Einstein’s general theory of relativity. The second detection, made on 26 December 2015 and announced this June, firmly established gravitational waves as a new window to the Universe. But even more exciting are the detections yet to come: the thousands of signals that should soon be observed by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo experiments. They will transform our understanding of black holes, neutron stars, supernova explosions, and perhaps even the origin and fate of the cosmos itself.
Changes to the fields of physics and astronomy are already being felt. The two events reported so far have significantly increased the number of known stellar-mass black holes, and have demonstrated that black holes can form tight pairs and merge violently within the lifetime of the Universe; such mergers are the inferred cause of the September 14 and December 26 signals. Drawing on data from those two events, my colleagues in the LIGO and Virgo collaborations have tested general relativity in novel ways, far outside our terrestrial experience. And we have shown that black holes collide more often than expected, which has led some researchers to speculate that black holes might be abundant enough to qualify as a variety of dark matter.
As with any new observational tool, the most important discoveries from the new detectors will surely be the ones that are unexpected. But we also have a good sense of the amazing things that the gravitational universe will tell us, even in the absence of surprises.
First, we can be certain that we will detect many more merging pairs of black holes comparable to the two already detected. The current instruments operate at roughly a third of their design sensitivity. At their ultimate sensitivity, the two LIGO detectors (in Louisiana and Washington state) and the Virgo experiment (near Pisa in Italy) will register dozens to hundreds of black-hole events per year. This large sample will yield a detailed census of black holes, and will allow astronomers to characterise their population all across the Universe, evaluating theories of how they form.
We also expect to observe mergers of neutron stars, the ultradense remains of stars that were too small to form black holes. Whereas black holes are so extreme that they are breathtakingly simple (completely described by their mass, spin and charge), neutron stars show the Universe at its most bizarre and complex. They contain more mass than our Sun packed into a sphere the size of Manhattan, with magnetic fields that can be more than a billion times as powerful as Earth’s. We do not understand how matter this dense behaves, nor do we know how their magnetic fields are sustained. What we do know is that pairs of neutron stars sometimes spiral into each other. The resulting gravitational waves will give us, for the first time, an unobstructed picture of neutron stars as they interact.
Unlike black holes, naked neutron stars emit light and other forms of radiation. Neutron-star mergers can produce a rapid flash of gamma rays or X-rays, along with a faint optical afterglow that can linger for days or weeks. With LIGO and Virgo operating in concert, we can localise the position of colliding neutron stars to within a few degrees in the sky. Optical telescopes can then search this patch of sky for a fading signal emitted by radioactive material ejected during the merger. This simultaneous observation of gravitational and electromagnetic signals could solve many long-standing mysteries in astronomy, such as the nature of energetic flashes known as short gamma-ray bursts, and the origin of heavy elements, including much of the gold found on Earth.
Gravitational waves can also show what happens in a ‘core-collapse’ supernova explosion, which occurs when the core of a massive star exhausts its nuclear fuel and is crushed under the star’s immense mass. This is an open question in astrophysics, because the mechanism that drives the explosion is hidden deep inside the star. Gravitational waves from supernovae will travel directly from the star’s centre to our detectors. Core-collapse supernovae are exceptionally rare, however; the last such one near our galaxy was in 1987, and the last known event in our galaxy proper was 400 years ago. Gravitational-wave scientists will have to be lucky and patient.
Looking out on an even grander scale, gravitational waves from neutron star mergers will give us a fresh way to study the expansion of the Universe. Our current picture of cosmology – in which the Universe is expanding following the Big Bang, and is accelerating due to an unseen ‘dark energy’ – relies heavily on observations of supernovae in distant galaxies. Gravitational waves will provide complementary information: the intensity (amplitude) of the gravitational signal tells us the distance to the event, while the optical appearance of the merger reveals how much its light has been stretched, or redshifted, on its way to Earth. These two pieces of information define the rate at which the Universe is expanding. Measuring this rate independently will provide an important check of our cosmological models.
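The two measurements described above combine into an expansion-rate estimate through the Hubble law. The sketch below is illustrative only: the distance and redshift are assumed example values, not numbers from a real detection.

```python
# Toy "standard siren" estimate of the Hubble constant.
# Assumed inputs (illustrative, not from a real event):
#   distance_mpc - luminosity distance inferred from the wave's amplitude
#   redshift     - measured from the merger's optical counterpart
# For nearby events, the Hubble law v = H0 * d with v ~ c * z gives H0 = c * z / d.

C_KM_S = 299_792.458  # speed of light in km/s

def hubble_constant(distance_mpc: float, redshift: float) -> float:
    """Return H0 in km/s/Mpc from a siren distance and an optical redshift."""
    recession_velocity = C_KM_S * redshift  # km/s (low-redshift approximation)
    return recession_velocity / distance_mpc

# Example: a hypothetical merger at 40 Mpc whose host galaxy shows z = 0.0093
print(hubble_constant(40.0, 0.0093))  # roughly 70 km/s/Mpc
```

The low-redshift approximation v ≈ cz breaks down for distant events, which is why a single nearby merger with an optical counterpart is the cleanest case.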
Finally, LIGO and Virgo might detect a faint background hum of gravitational waves that pervades the entire Universe, constantly vibrating all of empty space. Many theories predict an omnipresent gravitational energy produced either from the accumulation of astrophysical events such as black hole mergers or from an early, extremely rapid episode of cosmic inflation immediately after the Big Bang. If the hum is loud enough, it will show up as a correlated signal between widely separated detectors such as LIGO and Virgo. Measuring the gravitational-wave background would be a dramatic achievement.
For the next few years, progress in gravitational-wave science will be limited by the sensitivity of the detectors. With each boost to their performance, it’s likely that we will uncover events from new types of sources. Eventually, perhaps after a large international investment in new facilities, progress in the field will be limited only by the willingness of the Universe to provide rare, exotic signals to observe.
LIGO and Virgo have already performed a staggering feat. Consider the properties of the September 14 event: the signal was generated by two objects, each roughly 35 times the mass of our Sun, locked in a decaying orbit the size of Switzerland, circling each other 50 times a second. The energy involved was immense, briefly exceeding that of all the starlight in the Universe, yet the signal that reached Earth was among the most imperceptible things that humans have ever measured. As gravitational-wave detections make the transition from sensational discoveries to routine tools for astrophysics and cosmology, the invisible shaking of space will, paradoxically, illuminate parts of the Universe that were entirely dark until now.
This article was originally published at Aeon and has been republished under Creative Commons.
These widely varying theories have flourished because there is no consensus on how the gases that existed on a lifeless Earth became the molecules that, almost 4 billion years later, led us here. But one new study suggests something very different.
In a paper recently published in ACS Central Science, Israeli researchers made digital models of 12 potential conditions that could have birthed life on Earth. Each condition combined a carbon source (CO, CO2, or CH4) with a nitrogen source (N2 with or without HCN, or NH3 with or without HCN).
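The count of 12 conditions follows directly from pairing each carbon source with each nitrogen condition (3 × 4 = 12). A minimal sketch of that enumeration:

```python
# Enumerate the modelled atmospheres: every carbon source paired with
# every nitrogen condition, as described in the study.
from itertools import product

carbon_sources = ["CO", "CO2", "CH4"]
nitrogen_conditions = ["N2", "N2 + HCN", "NH3", "NH3 + HCN"]

atmospheres = list(product(carbon_sources, nitrogen_conditions))
print(len(atmospheres))  # 12
```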
After setting up these conditions, it was time to model a “trigger.” Some trigger must have turned these gases into the life-giving molecules that later created viable organisms. But this team looked beyond previous candidates such as lightning or meteorite impacts. They suggested that collapsing bubbles could be responsible for the transformation. That’s right — collapsing bubbles.
Inside of collapsing bubbles, at the microscopic level, pressures and temperatures can exist in such extremes that they can cause chemical reactions. And, because bubbles can be found anywhere from deep under the ocean to the breaking of waves or the crashing of waterfalls, these researchers considered them as a possible trigger for the origins of life through a process known as sonochemical synthesis.
The team’s model of initial Earthly conditions showed which organic molecules could realistically be produced inside these bubbles. The simulated reactions yielded both transient and stable products.
Among the stable products were biologically relevant molecules — precursors that could eventually form lipids, amino acids, and nucleotides. The hope is that the model will next be recreated physically, demonstrating chemically, and not just computationally, that life could potentially emerge from bubbles.
Once you know that many dinosaurs had feathers, it seems much more obvious that they probably evolved into birds. But there’s still a big question. How did a set of dinosaurian jaws with abundant teeth (think T. rex) turn into the toothless jaws of modern birds, covered by a beak? Two things had to happen in this transition, suppression of the teeth and growth of the beak. Now new fossil evidence has shown how it happened.
In a new study, Shuo Wang from the Capital Normal University of Beijing and colleagues studied a series of dinosaur and early bird fossils to see the transition. They found that some dinosaurs evolved to lose their teeth as they got older and sprouted a small beak. Over time, this process happened earlier and earlier until eventually the animals emerged from their eggs with a fully formed beak.
The oldest birds actually had reptilian-like teeth — for example Archaeopteryx from the late Jurassic period (150m years ago) and Sapeornis from the early Cretaceous (125m years ago). But other early birds had lost their teeth, such as Confuciusornis, also from the early Cretaceous.
Modern birds all lack teeth, except for the South American hoatzin, Opisthocomus, whose hatchlings have a small tooth that they use to help them escape from their egg and then shed. Developmental experiments in the 1980s showed that modern birds could probably generate teeth if their jaw tissue was artificially stimulated with the right molecules. This suggests their ancestors at some point grew teeth naturally.
Meanwhile, many dinosaurs actually did have beaks of some kind. Beaks are composed of keratin, the tough, flexible protein that also makes fingernails and cow horns, as well as feathers and hairs. We typically think of beaks as all-encompassing structures, extending from the pointed tip at the front back to the eyes, and including the nostrils in modern birds. But fossil examples show that many toothed dinosaurs actually possessed a minimal beak at the front of the snout.
To find out exactly how beaks came to replace dinosaur teeth, the researchers had to look inside the animals’ jaw bones. Dinosaur bone fossils are not simply rocky casts of the original bone, but they nearly always show all the internal structure. A microscopic thin section from any dinosaur bone shows all the detail of internal canals for blood vessels and nerves, as well as pits where the bone-generating cells sat. Thin sections of fossil jaw bones show the teeth in as much detail as in any modern jaw bone.
Nowadays, bones are rarely cut up, and it is much more common to use computed tomography (CT) scanning to look inside the bones without damaging them. The CT scans are a closely spaced series of X-rays that allow researchers to construct detailed 3D models showing every fine detail within the bone.
Wang and colleagues observed that the theropod dinosaur Limusaurus, which was closely related to birds’ ancestors, and the early bird Sapeornis had teeth right to the front of the jaws when they were young but lost them as they grew up. The detailed internal scans of the fossils showed adult Limusaurus had no teeth but still had tooth sockets in their lower jaws, closed off and forming a single canal. In adult Sapeornis, there were teeth at the back of the jaw but not at the front of the jaw.
As modern birds develop inside their eggs, the beak keratin begins to form at the tip of the snout and then grows back to cover both upper and lower jaws. Wang and colleagues argue that the mechanisms that regulate beak growth also suppress tooth formation. This is supported by studies of the gene BMP4 that show it controls both functions in modern birds.
Using the fossils to show how the animals evolved over time suggests beaks in some dinosaurs and bird relatives originally expanded backwards as the animals grew up and tooth sockets closed off. Eventually, this process happened earlier and earlier in the developmental cycle until hatchlings emerged with beaks and no teeth. Today, the gene BMP4 controls aspects of beak growth and tooth suppression, and these mechanisms might have been acting early in bird evolution.
For more evidence, Wang and colleagues looked more widely across vertebrates that have lost or reduced their teeth as they evolved, including some fishes, frogs, pangolins, whales and the entirely toothless turtles. In all cases, animals that had lost their teeth were associated with replacement of the teeth by a keratin beak.
These kinds of developmental observations help confirm the theory that the exquisite dinosaur fossils point to. In becoming birds, dinosaurs had to change in many ways, including shrinking in size, sprouting wings, adapting feathers for display and flight, improving their senses, shortening their tails, and losing teeth, among many other characters. It is important to be able to identify plausible evidence for how each of these amazing changes happened.
Materials scientist Trisha Andrew explains that her team took a plain pair of cotton gloves and used a vapor deposition method for nano-coating fabric to coat the fingers in a polymer known as poly(3,4-ethylenedioxythiophene), or PEDOT. A coin battery weighing 1.8 grams (0.0039 lbs) powers the gloves, but does not supply enough current to pass through the skin and cause harm.
“We took a pair of cotton gloves and coated the fingers to allow a small amount of current to pass through, so they heat up,” said Andrew. “It’s regular old-fashioned cotton cloth. We chose to make a pair of gloves because the fingers require a high curvature that allows us to show that our material is really flexible.”
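The warming effect is ordinary Joule heating: current driven through the resistive PEDOT coating dissipates power as heat. The article gives neither the coating resistance nor the cell voltage, so both numbers in this sketch are assumptions for illustration only.

```python
# Joule-heating sketch for a resistive PEDOT coating driven by a coin cell.
# Both values below are illustrative assumptions, not measured figures.

def heater_power(voltage_v: float, resistance_ohm: float) -> float:
    """Power dissipated as heat in a resistive coating: P = V^2 / R."""
    return voltage_v ** 2 / resistance_ohm

V_CELL = 3.0        # typical lithium coin-cell voltage (assumed)
R_COATING = 10.0    # per-finger coating resistance in ohms (assumed)

print(f"{heater_power(V_CELL, R_COATING):.1f} W dissipated as heat")  # 0.9 W
```

With these assumed values the coating dissipates under a watt — gentle warmth, consistent with the low-power, skin-safe design described above.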
The test gloves the team made kept fingers warm for about eight hours, though switching to a rechargeable battery was suggested, as it would enable longer use. They also continued to work after being ripped, repaired, charged overnight, and even submerged in water. The last scenario is especially useful, as it means the gloves won’t shock anyone wearing them in the rain or snow.
Gloves aren’t the only clothing the team is experimenting with, though, as they’ve also coated cotton yarn often used in sweaters. The material yielded favorable results, suggesting the coating method can be used for more than small articles of clothing.
Scientists announced this week that they have once again recorded gravitational waves, ripples in space-time, from a pair of black holes colliding 1.8 billion light years away. They recorded the event on August 14, the fourth time in the past two years that astronomers have detected and recorded such ripples from collisions of black holes. The scientists made the announcement in a Physical Review Letters paper, as well as at a G7 meeting of science ministers in Turin, Italy.
The August collision involved a black hole with a mass of about 31 times that of the Sun, and another with 25 solar masses. Once the two crashed, they created a black hole with a mass of 53 solar masses. In line with earlier gravitational wave detections, the remaining three solar masses were transformed into the gravitational waves the scientists detected. The August observations were made possible by the August 1 debut of Virgo, a new gravitational wave detector in Italy built by the European Gravitational Observatory.
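The mass bookkeeping above can be checked directly with E = Δmc². A quick sketch, with constants rounded:

```python
# Back-of-the-envelope check: 31 + 25 solar masses merge into 53, so about
# 3 solar masses radiate away as gravitational waves with E = delta_m * c^2.

M_SUN_KG = 1.989e30   # solar mass in kg
C_M_S = 2.998e8       # speed of light in m/s

m1, m2, m_final = 31.0, 25.0, 53.0      # masses in solar masses
delta_m = (m1 + m2) - m_final           # radiated mass: 3 solar masses
energy_joules = delta_m * M_SUN_KG * C_M_S ** 2

print(f"{delta_m} solar masses -> {energy_joules:.2e} J")
```

The result, on the order of 5 × 10⁴⁷ joules, is why such mergers briefly outshine (in gravitational radiation) everything else in the observable Universe.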
Earlier detections of gravitational waves were made by LIGO, a pair of L-shaped antennas in Louisiana and Washington. Since LIGO first announced its detection of the waves in February 2016 — confirming Albert Einstein’s prediction and verifying the nature of black holes — the scientists working with LIGO have been searching for more insights into the universe. Although the newer Virgo antenna is only one-fourth as sensitive as the LIGO antennas, the network can now triangulate the sources of gravitational waves, allowing optical telescopes to search for any accompanying visible effects sparking in the night sky.
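Triangulation works because a wave reaches widely separated detectors at slightly different times. A toy sketch of the idea, assuming an idealized plane wave and the approximate Hanford–Livingston baseline; the 5 ms delay is an invented example:

```python
# Toy localisation by arrival-time difference: a plane gravitational wave
# hitting two detectors a baseline L apart arrives dt = (L/c)*cos(theta)
# later at the second one, constraining the source direction theta.
import math

C = 299_792_458.0   # speed of light, m/s
BASELINE = 3.0e6    # approx. LIGO Hanford-Livingston separation, ~3000 km

def source_angle(dt_seconds: float) -> float:
    """Angle (degrees) between the baseline and the source direction."""
    cos_theta = C * dt_seconds / BASELINE
    return math.degrees(math.acos(cos_theta))

print(source_angle(0.005))  # a 5 ms delay -> roughly 60 degrees
```

Two detectors only constrain the source to a ring on the sky; adding a third detector, such as Virgo, intersects those rings and shrinks the search region to a patch that optical telescopes can realistically scan.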
The astronomers will continue working to improve their instruments until fall of 2018 when their next observation run will begin. LIGO Scientific Collaboration spokesman David Shoemaker told the New York Times: “This is just the beginning of observations with the network enabled by Virgo and LIGO working together. With the next observing run planned for Fall 2018, we can expect such detections weekly or even more often.”
On September 19, a massive 7.1 magnitude earthquake shook Mexico’s capital city. Most of the approximately 230 deaths weren’t from the original movement of the ground, however—in this quake, as in many others, collapsing buildings were the primary killer.
During an earthquake, the energy released by the movement of huge segments of the Earth’s crust generates powerful forces that spread across hundreds of kilometers. When the base of a building shakes or lurches from the force of an earthquake, the difference in motion between the top and bottom of the building can introduce new stress and strain on the materials that support the structure, causing them to rupture.
Over the past few decades, new designs and materials have meant that buildings in Mexico City and elsewhere have become better engineered to withstand the forces involved during a quake, but they are far from infallible. Researchers are developing novel techniques to ensure that future structures are even better equipped to avoid collapse. In fact, they may be able to predict or even prevent earthquakes before they happen. But before we get there, we need to understand how our current structures work (and don’t work).
Andrew Whittaker, a professor of civil engineering at the University of Buffalo, notes that a building during an earthquake is like a person standing on a plank supported on a roller. If someone moves the roller, the base can no longer support the upper weight, which could cause the person to fall. In general, the taller the building, the greater the difference in forces between the bottom and the top, increasing the likelihood that the material could break.
That’s why many cities in earthquake zones had restrictions on how many stories a building could have, according to Thomas Heaton, a professor of geophysics and civil engineering at the California Institute of Technology.
Of course, some materials are better at handling this changing energy than others. Wood works surprisingly well— “Wood is, in a sense, ideal in that it’s extremely lightweight compared to its structural stiffness and strength,” Heaton tells Futurism. Concrete is typically a brittle material, but when it’s reinforced with steel rebar, it becomes much more elastic when shaken side to side. Others fare far worse, including adobe clay buildings (millions of which were destroyed during the 2015 Nepal quake) and masonry buildings, which are made of brick or stone held together by mortar.
The design of the structure, too, can concentrate or dissipate energy from an earthquake. Engineered wood, such as laminated wood, makes up the frame of the house. The beams are supported with shear walls, which are designed to resist the lateral pressure from an earthquake, and are connected with metal bracings that loosen during shaking. The whole house is securely fastened to the foundation to prevent it from slipping off during a quake. Many modern skyscrapers do surprisingly well in earthquakes because the force exerted by strong winds demands that they be built to be flexible. Older structures can be retrofitted with today’s safer techniques, but that’s sometimes more expensive than simply building something new.
The structures that would do best in an earthquake would be windowless, concrete entities, the kind of thing that most architects would deem hideous. “When I look at [structures] that people think are beautiful, open airy things full of windows, I just see crushed human bodies waiting to happen,” Heaton says.
Some newer, more sophisticated structures are built with a technique called base isolation. Rubbery devices are installed near the foundation of the building to absorb the energy from an earthquake, preventing it from dissipating throughout the frame. “You can think about it as something like a shock absorber in a motor vehicle scaled up to a building level,” Whittaker says. Base isolation is still expensive, so it’s primarily employed in buildings that are used by large numbers of people or those that would be important in an emergency, such as hospitals and airports.
Apple Park, the tech company’s new campus, recently became the largest base-isolated structure in the world.
A Shaken City
Over the past century, engineers and architects have learned a lot about how to make structures more impervious to earthquakes.
But more knowledge is not a foolproof way to prevent damage. The most recent quake in Mexico City was an unfortunate reminder.
It’s far from the first time that a quake in Mexico City has become a cautionary tale — the infamous 1985 earthquake left 10,000 people dead and 30,000 injured. Mexico is wedged between two tectonic plates, making it one of the most seismically active countries in the world. The city itself is built on an ancient lakebed and is actively sinking. Its soft soil is reminiscent of pudding, as one geologist recently told Buzzfeed, so during an earthquake it bounces more than hard rock, which has the potential to cause more buildings to collapse.
Part of the danger comes from the types of buildings that are most common there. “A lot of the buildings in Mexico City are non-ductile structures. In photos [of the destruction] you see beams, columns, and connections—they’ve come apart,” Domniki Asimaki, professor of mechanical and civil engineering at the California Institute of Technology, tells Futurism.
Mexico has excellent engineers and fitting building codes, Whittaker notes. But most buildings in Mexico City are older, built to different specifications that may make them a hazard today. “Most modern constructions and modern codes can sustain earthquake loads pretty well. It’s the older buildings, constructed under older codes or assumptions, that…were not necessarily designed for the same forces,” Jason McCormick, associate professor of civil and environmental engineering at the University of Michigan, tells Futurism.
In this most recent seismic event, too many buildings still weren’t ready. Mexico City simply got lucky that the quake wasn’t stronger, according to the New York Times.
A Better Prepared Future
Anyone building a structure today has available the most sophisticated earthquake technology that has ever existed. But engineers are determined to make the future even better.
New materials might help buildings be better equipped for earthquakes to come. McCormick is developing polymer or metal foams, which are currently used in the aerospace industry, that could be scaled up to fit into joints between structural elements to absorb energy and prevent substantial damage. He’s also working on shape memory alloys that would not only absorb energy, but also re-center a building pushed off-kilter during a quake, reducing the need for repairs. Others are working on self-healing materials — capsules in concrete that would repair a crack in a building frame or foundation that resulted during an earthquake.
Still others are developing materials that could change stiffness depending on the forces being exerted on them. Controlled by an electrical circuit and used at the joints between framing materials, these materials could soften during an earthquake so that they absorb the energy flowing into the building and prevent damage to its structure, Asimaki says. A building equipped with sensors over its base and height could function similarly to dissipate energy from an earthquake.
But there’s a catch. The building industry is notoriously slow to change, Whittaker points out. Any materials created today have to be vetted by the building authorities, then worked into building codes. That means it might take decades to see these new materials in use. That lethargy is compounded by complacency in some of the most earthquake-prone regions. In California, for example, a strong quake hasn’t happened since 1994’s Northridge quake in Los Angeles or, you could even argue, since 1906, when a powerful temblor nearly decimated San Francisco. “Even in California, my generation hasn’t seen a big one. My parents haven’t,” Asimaki says. “We don’t have a memory of these things.”
So instead of clamoring for better technology or insisting their residences are better engineered for impending threats, residents in these areas might be satisfied with the bare minimum of earthquake preparedness. That’s especially true for stronger quakes. “I’m confident that we do know how to build structures that can go through the most severe earthquake shaking, but we’re not currently doing that,” Heaton says. “Things have been too quiet. People are too busy making money to really invest in what’s necessary based on things that we’ve learned in last 20 to 30 years.”
In the next few decades, researchers may be able to achieve an engineering feat they’ve worked towards for years: predicting earthquakes. Though we already have early warning systems in places like California and Japan, these give people at most a minute to find a safe place. Future systems could work better, giving people even more time. To truly understand the nature of earthquakes and the chaotic forces involved in them, seismologists need even more data, from satellite data of incremental ground movements to information gathered from sensors deep in the earth—expensive experiments that are too resource-intensive for any single research team, Asimaki notes.
In the next century, cities will be designed to take the dangers of earthquakes into account, Asimaki suspects. Electricity, phone lines, traffic lights, and transportation will automatically shut off before an earthquake hits. That could prevent them from causing further damage, as ruptured electrical and gas lines are some of the biggest reasons that fires plague cities after earthquakes.
Even further into the future, it’s possible that we could diffuse an earthquake’s energy before it ruptures destructively. We could trigger a smaller, controlled quake using liquid pressure, something we already do by accident during fracking and which NASA has proposed as a way to neutralize the enormous volcano under Yellowstone. “That’s definitely far away. But that’s something that’s not impossible,” Asimaki says.
As for the most recent quake in Mexico, engineers and seismologists will learn from it. Figuring out just what those lessons are will take time. And yet it’s surprising that earthquakes are still so mysterious to us. “One of the world’s greatest unsolved physics problems is how earthquakes work in the first place,” Heaton says. With every new quake, even the destructive ones, humans inch closer to understanding them.
A team of neuroscientists from New York University (NYU) sought to observe how memories are transferred over time. Specifically, they wanted to see whether the brain comes to represent different experiences with similar or shared neural features, and to understand how that process happens.
They conducted an experiment in which subjects were shown a series of objects on a computer screen, with each object paired with one of four pictures showing different locations. Afterwards, subjects were tested on their ability to match the objects with the locations: once immediately after the experiment concluded, and again a week later. During the tests, the team observed the subjects’ neural patterns associated with individual memories.
The results of this study revealed no overlap in the neural patterns tied to the memories of the object-scene pairings in the test conducted shortly after the experiment. However, during testing a week later, there was considerable overlap in two specific parts of the brain: the hippocampus and the medial prefrontal cortex (mPFC).
“It is as if in order to make sense of the world, the brain re-organizes individual distinct experiences into information clusters — perhaps signaling the emergence of conceptual knowledge,” explains Lila Davachi, an associate professor in NYU’s Department of Psychology. To clarify, the brain organized the subject’s experiences based on the overlap that occurred — the more that the neural patterns overlapped, or had in common, the more likely it was they would be grouped together.
The discovery didn’t end there. The team also noticed that the patterns corresponding to details in certain memories grew fainter during this reorganization. This is particularly concerning, as it suggests key details are slowly lost or forgotten after the brain sorts our experiences.
“This aspect of the research points to the tension between ‘good memory’ and learning: if we remember each individual experience as it was encountered, are we able to effectively learn about the underlying regularities across experiences?” asked doctoral recipient and study co-author Alexa Tompary.
Our collective 3D printing capabilities just got a major upgrade. Working with a commercial vendor, NASA has 3D printed a key part of a rocket using two distinct metal alloys for the first time ever. There have been other rocket components created with 3D printing, but until now, they have all been composed of a single metal. This new capability has the potential to decrease the cost of building rockets while at the same time making them safer.
NASA’s team was able to print a rocket engine igniter using the new process. The method — known as automated blown powder laser deposition — allows for a part to be created in a single piece, instead of several pieces which must then be assembled by welding or brazing. The traditional process takes much more time and manpower to complete, which means the new method could cut costs by a third and allow rockets to be built in half the time. When put to the test (functionality tests, that is), the part created with this new process passed.
Safer Space Travel
Traditional brazing methods melt together two metals using a filler metal. While the process does forge a strong bond, it still creates a seam. Under the highly demanding conditions of space travel, the intense pressures and temperatures pose the risk of cracking, endangering missions and all astronauts and cargo aboard. The new method eliminates that structural weakness, thereby reducing the risk.
This technology could have a significant impact on the Space Launch System (SLS), NASA’s next-generation launch vehicle meant to help get us to new celestial bodies like asteroids — and even Mars. Steve Wofford, manager for the SLS liquid engines office at NASA’s Marshall Space Flight Center, said, “In next-generation rocket engines, we aspire to create larger, more complex flight components through 3-D printing techniques.”
SpaceX is also working to make space travel cheaper in a variety of ways, including manufacturing reusable rocket systems. Efforts like these will continue to open up the possibility for space travel, greatly expanding the potential of learning all we can from space.
Thanks to new methods of studying planetary density, we now know that the crust of our neighbor Mars is much thinner than scientists previously thought. A paper published in Geophysical Research Letters explains that the estimated crust density, about 2,580 kilograms per cubic meter (2.58 grams per cubic centimeter), is roughly 96% of Earth’s own crust density of 2.7 grams per cubic centimeter.
This may sound like a minor detail at first blush, but a significantly lower crust density on Mars transforms our understanding of (inter)planetary science.
Obviously, Earth’s proximity makes it easy for us to study the blue mote’s gravitational field, relative to distant bodies like Mars. While the measurements available for calculating the density of other planets are limited, NASA researchers expanded the scope of their data to arrive at a more accurate estimate of the planet’s density. To make their calculations, they combined topographic data about the shape and structure of the Martian surface with the gravitational data they already had. In a statement, researcher Greg Neumann said, “As this story comes together, we’re coming to the conclusion that it’s not enough just to know the composition of the rocks. We also need to know how the rocks have been reworked over time.”
These findings will help researchers learn more about what is going on under the surface of the Red Planet. Aside from the information about the planet’s density, the researchers were also able to learn more about the makeup of the planet.
They gleaned that the surface of Mars is much more porous than previously thought, with volcanic regions showing denser makeup than others. Tanya Harrison, Martian scientist and director of research for the Space Technology and Science Initiative at Arizona State University told Gizmodo, “The cool thing about this study is that the method they developed can be applied to any body where we have data about its gravity and topography.”
All of this information could help us better prepare for future missions to the Red Planet, when we’ll finally have a shot at studying it in person. Ultimately, this new method advances the study of (other) rocky planets, allowing us to peer into their history and adding to our study of the composition of matter in the greater universe: its distribution, behavior, and origin.
For years, supercomputers have provided hope for insight into some of science’s most mysterious and seemingly unanswerable problems. The continued advancement of quantum computing has given scientists renewed hope, but a recent study from UK and Russian researchers takes the potential one step further by combining light and matter to form what is known as “magic dust.”
Based at Cambridge, Southampton, and Cardiff Universities in the UK and the Skolkovo Institute of Science and Technology in Russia, these researchers have demonstrated that this magical combination could potentially allow us to surpass the capabilities of even the most advanced supercomputers. Quantum particles known as polaritons, which are half light and half matter, were shown to “light the way” to simple solutions when given complicated problems. The results of this study, as reported in the journal Nature Materials, could eventually lead scientists to solve the currently unsolvable.
When calculating a mathematical solution to a complex problem with real-world applications, it is essential to take the fewest steps possible. The most direct path to an answer keeps the risk of confusion or mistakes low, but for the most intricate problems of our known universe, this becomes a seemingly impossible task. “This is exactly the problem to tackle when the objective function to minimise represents a real-life problem with many unknowns, parameters, and constraints,” said the paper’s first author, Professor Natalia Berloff of Cambridge’s Department of Applied Mathematics and Theoretical Physics and the Skolkovo Institute of Science and Technology.
Berloff and her team approached this use of “magic dust” from a fairly creative angle. As the will-o’-the-wisp lights a path for travelers in Scottish folklore, polaritons act as easily detectable markers, guiding scientists toward a solution. Layers of select atoms, such as gallium, arsenic, indium, and aluminum, are stacked, and a laser is directed at them. The electrons in this light-matter mashup absorb the light and re-emit it in different colors. Because polaritons are 10,000 times lighter than electrons, they can reach densities that turn them into a Bose-Einstein condensate, a state of matter in which the quantum phases of the polaritons sync up, creating a macroscopic quantum object detectable through photoluminescence. These scientists are, quite literally, creating beacons of light.
Co-author Professor Pavlos Lagoudakis, Head of the Hybrid Photonics Lab at the University of Southampton and the Skolkovo Institute of Science and Technology (where the experiments were performed) expounded, “We are just at the beginning of exploring the potential of polariton graphs for solving complex problems…We are currently scaling up our device to hundreds of nodes, while testing its fundamental computational power. The ultimate goal is a microchip quantum simulator operating at ambient conditions.”
It isn’t just the depths of astrophysics that contain unsolvable problems. Biology, finance, space travel, and others have deep wells of unanswered questions. Questions that a supercomputer using magic dust to light the way to a simple solution might be able to answer.
Human evolution can seem like a phenomenon of the distant past which applies only to our ancestors living millions of years ago. But human evolution is ongoing. To evolve simply means that mutations — the accidental changes to genes that happen normally in the process of copying DNA — are becoming more or less common in the population over time.
These changes can happen by chance, because the individuals who reproduced happened to carry a particular mutation somewhat more often than individuals who didn’t have children. They can also happen because of natural selection, when carriers of a specific mutation are better able to survive, reproduce, or tend to their family members — and therefore leave more descendants. Every biological adaptation, from the ability of humans to walk upright on two feet to flight in birds, ultimately traces back to natural selection acting on these minute changes, generation after generation.
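The two forces described above can be sketched in a toy simulation — a minimal illustration with invented numbers, not a model from any study: selection nudges the expected frequency of a mutation each generation, while drift adds random sampling noise on top.

```python
import random

def next_generation(freq, pop_size, fitness_advantage=0.0):
    """Return the mutation's frequency in the next generation.

    Selection: carriers leave (1 + fitness_advantage) times as many
    descendants on average, shifting the expected frequency.
    Drift: the next generation is a random sample, so the realised
    frequency wanders around that expectation.
    """
    expected = freq * (1 + fitness_advantage)
    expected /= expected + (1 - freq)  # renormalise against non-carriers
    carriers = sum(random.random() < expected for _ in range(pop_size))
    return carriers / pop_size

random.seed(1)
freq = 0.05  # the mutation starts out rare
for _ in range(300):
    freq = next_generation(freq, pop_size=2000, fitness_advantage=0.02)
print(f"frequency after 300 generations: {freq:.2f}")
```

Even a small 2% advantage compounds over hundreds of generations, while with `fitness_advantage=0` the frequency simply wanders by chance — which is the distinction between evolving and adapting drawn below.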
So humans are definitely still evolving. The question is whether we are still adapting: are individuals who carry harmful mutations living shorter lives and reproducing less — ultimately leaving fewer descendants? For instance, terrible eyesight may have been a major survival disadvantage for our ancestors on the savanna, but with glasses and laser surgery, it’s unlikely to prevent people from living a long life today. How common, then, are mutations under selection in contemporary humans?
Long Time Scale Makes Evolution Hard to Study
Because adaptations involve tiny changes in the frequencies of mutations from generation to generation and their fortune plays out over tens to hundreds of thousands of years, they are incredibly hard to study directly — at least in long-lived organisms such as people.
So while there is overwhelming evidence for human evolution and unequivocal footprints of adaptation in the genome, rarely have scientists been able to directly observe natural selection operating in people. As a result, biologists still understand very little about the workings of natural selection in humans.
Indeed, one of the clearest footprints of a past adaptation in the human genome involves a mutation that permits milk to be digested in adulthood. This mutation in the lactase gene rapidly rose in frequency with the rise of dairy farming thousands of years ago, independently in multiple populations. It’s the reason some people can drink milk as adults, whereas most remain lactose intolerant.
But even in this well-studied case, let alone for the rest of the genome, researchers don’t know whether the mutation was beneficial for survival or for reproduction; whether the benefits were the same for both sexes, or across all ages; or whether the benefit depended on the environment (for instance, availability of other food sources). As pointed out by evolutionary biologist Richard Lewontin in the 1960s, to learn these properties of natural selection would require a massive study, in which genetic and genealogical information is obtained for hundreds of thousands of people.
Fifty years later, our group realized that this thought experiment is starting to become feasible. We sought large biomedical data sets that would let us learn about mutations that affect survival.
Looking at Gene Frequency Across Age Groups
Our basic idea was that mutations that lower the chance of survival should be present at lower frequency in older individuals. For example, if a mutation becomes harmful at the age of 60 years, people who carry it have a lower chance to survive past 60 — and the mutation should be less common among those who live longer than that.
We therefore looked for mutations that change in frequency with age among around 60,000 individuals from California (part of the GERA cohort) and around 150,000 from the U.K. Biobank. To avoid the complication that people whose ancestors lived in different places carry a somewhat different set of mutations, we focused on the largest group with shared ancestry within each study.
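A toy version of that analysis can be sketched in a few lines. The variant, its 10% starting frequency, and the survival numbers below are all invented for illustration; the point is only the shape of the signal the study looked for — carrier frequency falling among people who reach older ages.

```python
import random

random.seed(0)

# Hypothetical variant: carried by 10% of people at birth, harmless
# until age 60, after which it lowers yearly survival (numbers invented).
def simulate_lifespan():
    carrier = random.random() < 0.10
    age = 40
    while age < 90:
        yearly_survival = 0.98
        if carrier and age >= 60:
            yearly_survival = 0.90
        if random.random() > yearly_survival:
            break
        age += 1
    return carrier, age

people = [simulate_lifespan() for _ in range(50_000)]

# Carrier frequency among those who survive to each checkpoint age:
# it should drop past 60, the signature of a variant under selection.
frequencies = {}
for checkpoint in (50, 65, 80):
    alive = [carrier for carrier, age in people if age >= checkpoint]
    frequencies[checkpoint] = sum(alive) / len(alive)
    print(f"carrier frequency among those reaching {checkpoint}: "
          f"{frequencies[checkpoint]:.3f}")
```

In a real cohort the drop is far subtler, which is why samples of tens to hundreds of thousands of genomes are needed to detect it.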
Across the genome, we found two variants that endanger survival. The first is a variant of the APOE gene, a well-known risk factor for Alzheimer’s disease; it drops in frequency beyond age 70. The second harmful variant is a mutation in the CHRNA3 gene. Associated with heavy smoking, this inherited mutation starts to decrease in frequency at middle age in men, because its carriers are less likely to survive to older ages.
Both deleterious variants only had an effect long after the typical ages of reproduction for both females and males. Biologists usually consider such mutations to not be under selection. After all, by late middle age, most people have already passed their genes on to whatever offspring they’ll have, so it seems like it might not matter how long they live beyond that point.
Why, then, would we find only two, when our study was large enough to detect any such variant that was common in the population? One possibility is that mutations that imperil survival only late in life almost never arise. While that is possible, the genome is a large place, so it seems unlikely.
The other intriguing possibility is that natural selection prevents even late-acting variants from becoming common in the population, if they have large enough effects. Why might that be? For one, men can father children in old age. Even if only a tiny fraction of them do so, it may be enough of an evolutionary fitness cost for selection to act on. Survival beyond the age of reproduction could also benefit the survival of related individuals who carry the same mutations, most directly children. In other words, surviving past typical reproductive ages may be beneficial for humans after all.
Your Mutations do Influence Your Survival
In addition to examining one mutation at a time, we were also interested in considering sets of mutations that have all been shown to influence the same trait, and might have very subtle effects on survival individually. For example, researchers have identified approximately 700 common mutations that influence height, each contributing only millimeters. To this end, we considered tens to hundreds of mutations that shape variation in one of 42 traits.
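The idea of combining many tiny effects is, at its core, a weighted sum — a polygenic score. The variant names and effect sizes below are invented for illustration; real scores use hundreds of measured effects.

```python
# Toy polygenic score for height: each variant's effect (in millimeters)
# is multiplied by how many copies (0, 1, or 2) a person carries, then
# summed. Variant names and effect sizes are hypothetical.
height_effects_mm = {"rs_a": 0.8, "rs_b": -0.5, "rs_c": 1.2}

def polygenic_score(genotype, effects):
    """genotype maps variant name -> copies carried (0, 1, or 2)."""
    return sum(effects[v] * genotype.get(v, 0) for v in effects)

person = {"rs_a": 2, "rs_b": 1, "rs_c": 0}
print(polygenic_score(person, height_effects_mm))  # 2*0.8 + 1*(-0.5) + 0*1.2 = 1.1
```

Individually each term is negligible, but summed across many variants the score captures a trait's heritable component well enough to test against survival.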
We found genetic mutations linked to a number of diseases and metabolic traits that decrease survival rates: individuals who are genetically predisposed to have higher total cholesterol, LDL cholesterol, risk of heart disease, BMI, risk of asthma, or lower HDL cholesterol tend to die younger than others.
Perhaps more surprisingly, we discovered that people who carry mutations that delay puberty or the age at which they have their first child tend to live longer. It was known from epidemiological studies that early puberty is associated with adverse effects later in life such as cancer and obesity. Our results indicate some of that effect is probably due to heritable factors.
So humans carry common mutations that affect their survival and natural selection appears to act on at least a subset, in some contemporary environments. But what is bad in one context may well not be in another; as one example, the CHRNA3 variant has an effect because people smoke. These are early days, however, and our findings offer only a first glimpse of what can soon be gleaned from millions of genomes, in combination with genealogical records. In future work, it will be important to study not only lifespan, but also the number of children and grandchildren individuals leave, as well as populations and environments worldwide.
For the longest time, metal-based manufacturing has been difficult and costly. High-strength aluminum alloys are difficult materials to work with for additive manufacturing, also known as 3D printing. Now, researchers from HRL Laboratories have developed a new method that allowed them to 3D print high-strength aluminum and weld previously unweldable material.
The HRL researchers came up with a method they call nanofunctionalization, in which nano-functionalized powders are fed to a 3D printer. The powder is applied in thin layers, which a laser then heats to solidify into a three-dimensional object. During melting and solidification, structures produced this way don’t crack and maintain their full alloy strength, thanks to the nanoparticles acting as nucleation sites for the intended alloy microstructure.
“Our first goal was figuring out how to eliminate the hot cracking altogether. We sought to control microstructure and the solution should be something that naturally happens with the way this material solidifies,” Martin said.
Finer and Stronger Structures
High-strength aluminum alloys — including types like Al7075 and Al6061 — are currently used to engineer aircraft and automobile parts, such as airplane fuselages. However, current manufacturing methods are expensive and do not allow these materials to be shaped with fine detail.
Now, with HRL’s new nanofunctionalization technique, which is easily scalable, it’s possible to 3D print these high-strength alloys in all shapes and sizes. This allows faster, cheaper, and more detailed manufacturing using high-strength materials. Furthermore, because melting and solidification in 3D printing are akin to welding, the technique makes it possible to weld previously unweldable alloys.
To determine which particles had the properties they needed, the HRL team enlisted the help of Citrine Informatics. “The point of using informatics software was to do a selective approach to the nucleation theory we knew to find the materials with the exact properties we needed,” HRL’s Brennan Yahata explained. “Once we told them what to look for, their big data analysis narrowed the field of available materials from hundreds of thousands to a select few. We went from a haystack to a handful of possible needles.”
The Engine, launched by MIT and based in Cambridge, Massachusetts, has announced it is investing $200 million into startup companies focusing on “tough tech,” or technology that takes time to commercialize and is often deemed to be risky by other firms and companies.
The Engine was initially formed in 2016 to provide ambitious projects with the funding, facilities, and services necessary to turn those ideas into a reality. The group’s announcement marks the beginning of its first investment fund, planting the first seeds in what MIT President Rafael Reif has dubbed the “innovation orchard.”
Seven startups, or Founders, will receive the funding and support of The Engine. Each of these Founders is based in Boston, and together they focus on a wide range of areas, including aerospace, advanced materials, biotech, genetic engineering, and renewable energy. These companies are: Analytical Space, Baseload Renewables, iSee, C2Sense, Kytopen, Suono Bio, and Via Separations.
“Tough tech companies have historically been underserved and underfunded, leaving many breakthrough inventions stuck in the lab,” said The Engine CEO and Managing Partner Katie Rae in a press release. “By building a first-of-its-kind organization with a mission to focus exclusively on founders working on tough tech, The Engine aims to transform the status quo venture ecosystem and to make Boston the heart of tough tech innovations.”
She added, “Today’s group of startups have massive ambitions and are working on transformative innovations poised to solve some of the world’s most complex challenges.”
Funding the Future
The Engine’s choice of startups could bring life-changing results to the world. Baseload Renewables, for example, seeks to replace fossil fuels with renewable energy, with the hope of eventually creating an energy grid that no longer relies on carbon and, in doing so, reducing our contributions to climate change, which has affected the nature of hurricanes and has the potential to lead to much worse.
Meanwhile, iSee is working on autonomous cars, which could save lives by preventing accidents that result from human error. The team’s system, in its most ideal form, can predict potential outcomes as well as deal with unpredictable situations that may arise on the road. In early tests, the team’s deep learning approach has proven more effective than conventional machine learning methods.
The complexity of the human body and its physiology — features such as the immune system, for example — demands an astronomical level of precision in both timing and formation. Each genetic element must be activated at an exact moment and location to function properly. Cells mature to take on their intended functions under the direction of “enhancer” clusters, but for the relevant genes to be expressed, these enhancers must first reach specific “promoter” regions across long distances of genomic wasteland.
These two elements must be paired up in just the right way for the systems of the human body to function properly. In fact, discord between promoting regions and enhancer clusters can cause diseases such as lymphoma and leukemia. In other words, a lack of timing at the physiological level can be fatal. So, how does the body get this right?
University of California San Diego biologists have discovered the linchpin to what they’re calling the “big bang” of immune cell development: the previously overlooked “non-coding” regions of DNA between genes.
They found that these regions of DNA activate a change in DNA’s 3D structure that unites promoters and enhancers with a high degree of accuracy. The process they observed in mice allowed T cells to develop properly. These cells are the building blocks of the immune system, which is why problems with their development can cause diseases such as leukemia and lymphoma.
Harnessing Forgotten DNA
This new research illustrates the extremely precise nature of our biology, a point not lost on UC San Diego Division of Biological Sciences laboratory head Cornelis Murre.
“Nature is so clever. We think of the genome as an unstructured strand, but in fact, what we are seeing is a highly structured and meaningful design,” he noted in a press release. “The process of architecture remodeling we’ve described allows the enhancer and promoter to find each other in 3D space at precisely the right time. The beauty is that it’s all very carefully orchestrated.”
Murre and the rest of the research team believe that although their results were focused on T cells in mice, this same mechanism may be present throughout plant and animal species. “We have seen one example, but there are likely many others all occurring at the same time when cells are moving along the developmental pathway — that’s kind of amazing,” he asserted.
Understanding T cell development can also help us understand how and why the process fails, perhaps illuminating ways to prevent this failure and the diseases that result from it. “The implications of these results are not only how normal T cells develop, but that tumor suppression is regulated through this mechanism, at least in part,” said Murre. “Ultimately, we may be able to fix mutations associated with disease and these forgotten strands of DNA.”
Wrapping your messages up with quantum weirdness is a great way to avoid sneaky eavesdropping, but in its current form it’s more like Morse code than high speed ADSL broadband.
That could be set to change with a radical advancement in quantum technology that copies entangled patterns of light to help relay quantum codes, potentially opening the way for quantum communications with an infinite number of channels to send encrypted transmissions.
A team of Scottish and South African researchers have provided the world’s first experimental demonstration of something called entanglement swapping, which allowed them to transmit the orbital angular momentum (OAM) of photons of light further than before.
The tl;dr version of this is pretty straightforward. Quantum information can now be relayed through a series of photons, reducing the risk of losing data and allowing researchers to make better use of light’s spatial patterns to carry more data.
If you want details, buckle up and put on your fancy quantum pants.
For the better part of a century we’ve accepted – more or less – that we can only describe a particle’s properties in the context of a system that measures it. If a particle doesn’t bump into stuff that lets us give those properties a number, it continues to exist in a blurry state of infinite probabilities.
Here’s the weird thing: if that particle interacted with another particle in some way before we measured it, we can say that other particle is also part of that measurement system. We say both it and the measured particle are entangled.
Measuring the first particle’s properties is what turns it from a maybe into an actually. That measurement also turns its entangled partner from a maybe into an actually at the exact same time.
Einstein thought something was missing in this theory, rubbishing the whole thing as “spooky” on numerous occasions, but here we are nearly a century on and it’s still something we can’t quite wrap our heads around.
Nonetheless, we can use this weird process of entanglement to make super-complex codes that can’t be intercepted, providing for extremely powerful security.
Picture two sequences of entangled ‘maybes’ (which are called qubits) being transmitted to two separate points.
Each receiver can tell if their own message has been interfered with by deciphering the properties of their transmission and checking with the other person that they correspond.
If they fail to match up, somebody swapped their photons for duds.
But there’s one problem with all of this – transmitting a sequence of qubits over a distance puts them at risk of getting lost.
Quantum communication has been big news lately, with reports of entangled photons being sent down from space using a split laser beam, covering a distance of 1,200 kilometres (about 750 miles).
That was a pretty extreme feat, but is still an absurdly short distance when we’re looking at global networks. The transmission also required a direct line of sight.
This new method is basically an amplifier that can be set up at regular intervals, allowing entangled particles to pass on their quantum states.
The key is that entanglement swapping phenomenon.
Imagine two pairs of entangled photons – A1 and A2, and B1 and B2. Measuring one from each pair together, such as A1 and B1, entangles those in the same system in what’s called a Bell-state measurement.
What this means is A2 and B2 are, by virtue of their previous partners, now also entangled – even though they’ve never shaken hands before.
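A minimal state-vector sketch makes this concrete. It uses plain Python (no quantum libraries) and assumes the Bell-state measurement happens to yield the Φ+ outcome: after projecting A1 and B1 onto a Bell state, A2 and B2 — which never interacted — end up in a Bell state themselves.

```python
from math import sqrt

# Basis index for four qubits ordered (A1, A2, B1, B2).
def idx(a1, a2, b1, b2):
    return a1 * 8 + a2 * 4 + b1 * 2 + b2

# Start with two independent Bell pairs:
# (|00> + |11>)/sqrt(2) on (A1, A2), and the same on (B1, B2).
state = [0.0] * 16
for a in (0, 1):
    for b in (0, 1):
        state[idx(a, a, b, b)] = 0.5

# Bell-state measurement on (A1, B1): project that pair onto
# (|00> + |11>)/sqrt(2). What remains is a state of (A2, B2) whose
# unnormalised amplitudes are the overlaps below.
post = [0.0] * 4  # amplitudes over (A2, B2)
for a2 in (0, 1):
    for b2 in (0, 1):
        amp = (state[idx(0, a2, 0, b2)] + state[idx(1, a2, 1, b2)]) / sqrt(2)
        post[a2 * 2 + b2] = amp

# Renormalise (this Bell outcome occurs with probability = norm squared).
norm = sqrt(sum(x * x for x in post))
post = [x / norm for x in post]

print(post)  # ~[0.707, 0, 0, 0.707], i.e. (A2, B2) are now in a Bell state
```

The other three Bell outcomes work the same way, leaving A2 and B2 in a different but known Bell state, which is what lets a repeater chain correct for the outcome and pass entanglement down the line.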
This is the swapping part of the entanglement, and it can form the basis of a repeater that lets short-distance quantum messages be passed along for another leg of the journey without the relay being treated as an eavesdropper.
As it is, the quantum states are usually binary, making it little better than a Morse code of dots and dashes.
That’s not the end of the world, but if we’ve learned anything from a history of information technology it’s that there’s no such thing as too much bandwidth.
That’s where orbital angular momentum comes in. Think of it as a kind of twisting of a photon, not unlike polarisation.
Rather than building messages out of binary ones and zeroes, or dots and dashes, the OAM can be used to carry more information per particle.
This isn’t new in itself, but previously, sending information encoded in this kind of spatial mode required a large number of photons to cover the distance.
Entanglement swapping means those photons can be sent in repeats of short distances. What’s more, potentially other kinds of spatial modes could be used to carry information, opening the way to a virtually infinite number of channels.
Quantum mechanics might be weird enough to leave Einstein with a headache, but better get used to it. The future just keeps getting spookier.
Since the mid-twentieth century, two theories of physics have offered powerful yet incompatible models of the physical universe. General relativity brings space and time together into the portmanteau space-time, the curvature of which is gravity. It works really well on large scales, such as interplanetary or interstellar space.
But zoom into the subatomic, and things get weird. The mere act of observing interactions changes the behavior of what is (presumably) totally independent of observation. In those situations, we need quantum theory to help us make sense of it all.
Though scientists have made some remarkable attempts to bring these estranged theories together, notably string theory, the math behind them remains incompatible. However, new research from Antoine Tilloy of the Max Planck Institute of Quantum Optics in Garching, Germany, suggests that gravity might emerge from random fluctuations at the quantum level, which would make quantum theory the more fundamental of the two and put us on the path to a unified theory of the physical universe.
In quantum theory, a particle’s state is described by its wave function. This function allows theorists to predict the probability that a particle will be in this or that place. However, before the act of verification is made via measurement, no one knows for sure where the particle will be, or if it even exists. In scientific terms, the act of observation “collapses” the wave function.
Here’s the thing about quantum mechanics: it doesn’t define what a measurement is. Who — or what — is an observer? A conscious human? Without an answer, we’re stuck with paradoxes like Schrödinger’s cat, which invites us to consider that a boxed cat is, as far as we know, simultaneously dead and alive, and will remain so until we lift the lid.
One attempt to solve the paradox is the Ghirardi–Rimini–Weber (GRW) model from the late eighties. It incorporates random “flashes” that can cause the wave functions in quantum systems to spontaneously collapse. This purports to leave the outcome unbesmirched by meddling human observation.
Tilloy modified this model to extend quantum theory to encompass gravity. When a flash collapses a wave function and the particle reaches its final position, a gravitational field pops into existence at that precise moment in space-time. On a large enough scale, quantum systems contain many particles undergoing innumerable flashes.
According to Tilloy’s theory, this creates a fluctuating gravitational field, and the gravitational field produced by the average of these fluctuations is compatible with Newton’s theory of gravity. If gravity comes from quantum processes, but nevertheless behaves in a classical (or Newtonian) way, what we have is a “semiclassical” theory.
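The averaging claim can be illustrated numerically. Below is a toy sketch, not Tilloy's actual model: a point mass's "collapsed" position jitters randomly around a mean, and the field it sources at a distant observer, averaged over many flashes, matches the Newtonian field from the mean position. All numbers are arbitrary and purely illustrative.

```python
import math
import random

G = 6.674e-11  # gravitational constant, SI units

def field_at(observer_x, particle_x, mass):
    """Newtonian field magnitude at the observer from a point mass (1D toy)."""
    r = abs(observer_x - particle_x)
    return G * mass / r ** 2

# Toy "flash" model: each collapse places the particle at a random position
# jittering around x = 0 with a small spread; each placement sources a field.
rng = random.Random(1)
mass, observer_x, spread = 1.0, 100.0, 0.5
samples = [field_at(observer_x, rng.gauss(0.0, spread), mass)
           for _ in range(100_000)]

mean_field = sum(samples) / len(samples)
newtonian = field_at(observer_x, 0.0, mass)

# The average of the fluctuating field closely matches the Newtonian field
# sourced at the particle's mean position (deviation ~ (spread / distance)^2).
print(abs(mean_field - newtonian) / newtonian)
```

The individual samples fluctuate, but because the jitter is small compared with the distance to the observer, the average field is effectively classical.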
However, Klaus Hornberger of the University of Duisburg-Essen in Germany cautions that other problems must be tackled before Tilloy’s semiclassical solution can warrant serious consideration as a unifying theory of the fundamental forces underlying all modern physical laws. The model fits Newton’s theory of gravity, but Tilloy has yet to work out the math showing that it also describes gravity under Einstein’s theory of general relativity.
Physics may offer the greatest explanatory power of any scientific discipline, which makes it one of the most exciting. But the key to unified theories in physics is patience. As with Schrödinger’s cat, the will-to-know alone cannot fill in the gaps of what we simply don’t yet know.
For decades, astronomers have known that the Earth is consistently struck by high-energy cosmic rays — charged particles that are usually the nuclei of elements — that originate from a source in space outside our solar system. These cosmic rays possess the highest energies observed in nature, higher even than what man-made particle accelerators can produce, and now a team of scientists thinks it might have solved the mystery of their origin.
In a study published in Science, the researchers, known collectively as the Pierre Auger Collaboration, suggest that these cosmic signals may originate from outside of the Milky Way. Their conclusions were drawn using recordings from the Pierre Auger Observatory in Argentina, the largest cosmic ray observatory currently in existence, and other data.
Although cosmic rays with energy greater than two joules rarely reach Earth, when they do, their interaction with nuclei in the planet’s atmosphere creates a shower of electrons, photons, and muons, making them detectable by researchers. These showers of more than 10 billion particles spread out across diameters measuring several kilometers.
When one of the particles within this shower hits one of the Pierre Auger Observatory’s 1,600 detectors, which are spread out over an area of 3,000 square kilometers (1,158 square miles), researchers can determine its originating direction. In the new research, the Auger collaboration studied the arrival directions of more than 300,000 cosmic particles and discovered that the arrival rates of the cosmic rays vary and aren’t uniformly spread in all directions. The rate is actually higher for certain directions.
According to the team, this anisotropy indicates an extragalactic origin for the cosmic particles, as many are coming from an area where the distribution of galaxies is fairly high. However, because the direction points toward a broad area of the sky, the specific sources remain undetermined.
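The kind of anisotropy signal involved can be illustrated with a toy calculation (this is not the Auger collaboration's actual statistical analysis): represent each arrival as a unit vector on the sky, and compare the length of the mean vector for an isotropic sample against a sample with a directional excess.

```python
import math
import random

def random_direction(rng):
    """Uniform random unit vector on the sphere (an isotropic arrival)."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def dipole_amplitude(directions):
    """Length of the mean arrival vector: ~0 for isotropy, larger for an excess."""
    n = len(directions)
    sx = sum(d[0] for d in directions) / n
    sy = sum(d[1] for d in directions) / n
    sz = sum(d[2] for d in directions) / n
    return math.sqrt(sx * sx + sy * sy + sz * sz)

rng = random.Random(42)
isotropic = [random_direction(rng) for _ in range(20_000)]

# A biased sky: 10% extra events arriving from one direction (a toy "excess").
biased = ([random_direction(rng) for _ in range(18_000)]
          + [(0.0, 0.0, 1.0)] * 2_000)

print(dipole_amplitude(isotropic))  # near zero
print(dipole_amplitude(biased))     # clearly larger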
Our Celestial Origins
We still have much to learn about cosmic rays, and the Pierre Auger Collaboration expects to supplement their findings when upgrades to the Auger Observatory are completed in 2018. Still, this new discovery is worthwhile. Any new knowledge about these particles can help us better understand matter from outside the solar system and, as this research suggests, from outside the Milky Way.
“We are now considerably closer to solving the mystery of where and how these extraordinary particles are created, a question of great interest to astrophysicists,” University of Wuppertal professor Karl-Heinz Kampert, a spokesperson for the Auger collaboration, said in a press release.
Figuring out the mechanisms behind these cosmic rays could help explain how galaxies form and what in their composition accounts for the creation of such high-energy particles. Furthermore, since these cosmic rays are made of particles that are also found on Earth, they could also provide important clues into the fundamental questions about our origins — perhaps even the origins of the universe itself.
For the first time, a team of researchers has sequenced a tiny, exclusively asexual species of worm that originated around 18 million years ago. This work confirms that this species is among the oldest known living lineages of asexual animals. The research also explains how this worm has evaded the evolutionary dead end that asexual organisms usually face.
“Scientists have been trying to understand how some animals can survive for millions of years without sex because such strict, long-term abstinence is very rare in the animal world,” New York University Biology Professor and co-author David Fitch said in a press release. “This phenomenon is a significant one in understanding evolutionary genetics because it runs counter to the widely accepted view: that sexual reproduction is required to eliminate deleterious mutations and for adaptation to a changing environment.”
Inheriting copies of genes from both parents is usually a good insurance policy for animals: a functional copy from one parent can mask a harmful mutation inherited from the other, a process known as complementation.
“In the long term, producing offspring via intercourse allows for adaptation to changing conditions over time because it produces variation through genetic shuffling, or recombination,” Fitch explained in the release. “However, because such shuffling does not occur within asexual species, they tend to go extinct rapidly. So, it has been a longstanding mystery in biology how some asexual animals have survived for so many generations.”
Diploscapter pachys, the newly sequenced worm, is a small, transparent, asexual, free-living roundworm (also known as a nematode). It is closely related to Caenorhabditis elegans, a sexual organism commonly used for biomedical research. The team derived the genealogy of D. pachys using DNA to reveal that it originated approximately 18 million years ago among a group of exclusively asexual species.
Clones Built to Survive
As the team looked more closely at how D. pachys reproduces, they discovered that the germ cell-making process had been altered to prevent recombination, something common in many other asexual organisms. “Basically, the animals were cloning themselves,” Fitch stated in the release.
Furthermore, the scientists were surprised to learn that D. pachys clones contained only a single pair of chromosomes. Single-chromosome pairs like this one are exceedingly rare in higher organisms; in fact, there are only two other known animal species with a single-chromosome pair. Even very similar species, such as C. elegans, have 5 to 7 chromosomes.
Next, the team sequenced the genome of D. pachys. They aimed to test the structure of the single chromosome, to see whether it arose as other chromosomes were lost, or whether it was actually multiple ancestral chromosomes fused together. They found that D. pachys skips meiosis—the process by which sexual organisms halve their chromosomes and produce gametes—as well as recombination. Instead, the worm fuses the six chromosomes of its ancestor into a single chromosome, maintaining high genetic diversity asexually.
“Thus, the mystery of its longevity seems largely resolved: D. pachys overcomes the disadvantages of asexual reproduction by maintaining genetic variation, and with it, complementation,” Fitch explained. “Ironically, this is accomplished by making sure there is no recombination between the gene copies. If there were, the differences between the gene copies might be lost. In fact, D. pachys has gotten rid of several of the genes required to make the recombination machinery that exists in sexual organisms.”
Studying these results could help scientists develop a deeper understanding of asexual reproduction, and could have implications for human reproduction in the future. This work may provide insight into how humans could someday create clones that survive for generations, avoiding death and extinction at the species level.
The United States Department of Energy (DOE) is going blue, investing in what could be a biofuel alternative to fossil fuels. The department’s Advanced Research Projects Agency-Energy (ARPA-E) is funding two technologies that will attempt to make large scale production and cultivation of macroalgae—the technical term for seaweed—possible. The macroalgae will then be converted into biofuel and bio-based chemicals.
The two projects are funded under the cleverly-acronymized Macroalgae Research Inspiring Novel Energy Resources (MARINER) program.
According to ARPA-E Acting Director Eric Rohlfing, the US has the potential to produce enough biofuel from macroalgae to supply ten percent of the country’s annual transportation energy needs.
Each project has been awarded $500,000 to carry out its research. The first, from the University of Alaska Fairbanks, is developing “replicable scale model farms capable of the cost effective production of sugar kelp, a type of seaweed,” according to the DOE. The second group is from the University of Southern Mississippi and is working on “a semi-autonomous enclosure to contain fields of free-floating Sargassum mats. Wave-powered tugs, operated remotely onshore by a single person, will move the enclosure to ensure maximum exposure to nutrients.” Both are early efforts to build the infrastructure necessary to make this goal possible.
To help learn how to maximize growing potential, a team from the Pacific Northwest National Laboratory will additionally be given more than $2 million, over the course of two years, to develop models to predict the best locations and times to farm seaweed.
The march toward a majority of carbon-neutral vehicles on our roads is a slow one, and the vast infrastructure upgrade needed to make this viable is a major undertaking. Perhaps seaweed cultivation can help to mitigate the current impact of fossil fuel powered cars in a cheap, easy, and environmentally friendly manner.
The early oceans and atmosphere of Earth contained no free oxygen, even though photosynthetic cyanobacteria were producing it as a byproduct. Free oxygen is oxygen that isn’t combined with other elements such as nitrogen or carbon; aerobic organisms like humans need it to survive. About three billion years ago, small pockets of free oxygen began to appear in the oceans, and then, about 2.4 billion years ago, a rapid increase in atmospheric oxygen took place. During this period of about 200 million years, the amount of free oxygen in the atmosphere jumped by a factor of roughly 10,000. This time is known as the Great Oxidation Event, and it entirely transformed chemical reactions on the Earth’s surface.
University of British Columbia geologist Matthijs Smit and his colleague, University of Bern professor Klaus Mezger, knew that the Great Oxidation Event also transformed the composition of continents, so they began to study records of the geochemistry of igneous rocks and shales from all over the world to find a link — more than 48,000 rocks going back billions of years.
“It turned out that a staggering change occurred in the composition of continents at the same time free oxygen was starting to accumulate in the oceans,” Smit said in a press release. “Oxygenation was waiting to happen,” Smit added. “All it may have needed was for the continents to mature.”
The rock in modern Iceland and the Faroe Islands, rich in magnesium and low in silica, resembles what made up the continents before oxygenation. Crucially, those early rocks contained the mineral olivine, which initiates oxygen-consuming chemical reactions when it comes into contact with water, locking up oxygen. That is probably what happened early in Earth’s history as cyanobacteria produced oxygen.
As the continental crust evolved to become more like it is today, olivine virtually disappeared and the reaction it initiated stopped, allowing oxygen to accumulate. Once oceans became saturated with oxygen, the gas crossed into the atmosphere.
“It really appears to have been the starting point for life diversification as we know it,” Smit said in the release. “After that change, the Earth became much more habitable and suitable for the evolution of complex life, but that needed some trigger mechanism, and that’s what we may have found.”
Although the cause of the change in the continents remains unknown, Smit notes that modern plate tectonics started at about that time, and many researchers theorize a connection between the events.
This isn’t exactly about evolution, or abiogenesis, but by discovering how the most necessary substance for complex life became ubiquitous, these scientists may have solved a term in the equation for the origin of Earth-based life. Such valuable knowledge could also be applied to our search for life beyond the solar system. We already suspect that the two innermost exoplanets of the TRAPPIST-1 system might have vast amounts of liquid water. If (or when) we discern the presence of oxygen, could we deduce the positions and compositions of exoplanets’ continents, thus narrowing down a few terms on the far end of the Drake equation?
A team of Canadian and U.S. researchers has taken an all-important step toward making quantum networks more realistic, affordable, and secure. Their work, published in the journal Quantum Science and Technology, explains the benefits of using measurement-device-independent quantum key distribution (MDI-QKD) systems.
The team built its experimental system from commercially available, relatively cheap components such as distributed feedback (DFB) lasers and field-programmable gate arrays (FPGAs). According to senior author Dr. Qiang Zhou, this MDI-QKD system allows quantum bits (qubits) to generate keys in random states, making them harder to individually identify.
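To see why random choices make keys hard to attack, here is a loose classical sketch of the random-basis sifting idea behind standard QKD protocols such as BB84. It is not the MDI-QKD scheme itself, and the function and variable names are purely illustrative.

```python
import random

def sift_key(n_rounds, rng):
    """Toy BB84-style sifting: keep a bit only where the randomly chosen
    preparation and measurement bases happen to match (~half the rounds)."""
    alice_bits = [rng.randint(0, 1) for _ in range(n_rounds)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_rounds)]  # 0 = Z, 1 = X
    bob_bases = [rng.randint(0, 1) for _ in range(n_rounds)]

    # In an ideal, noiseless channel, Bob's result equals Alice's bit whenever
    # their bases match; mismatched-basis rounds are simply discarded.
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
            if a == b]

rng = random.Random(7)
key = sift_key(1_000, rng)
print(len(key))  # roughly half of the 1,000 rounds survive sifting
```

Because both parties' basis choices are random and revealed only after transmission, an eavesdropper measuring in a guessed basis introduces detectable errors; MDI-QKD goes further by also removing trust in the measurement devices themselves.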
Securing Quantum Networks
Still, making QKD systems completely secure will remain a difficult challenge for a while. As fellow author Raju Valivarthi explains, the components used to build QKD systems never fully conform to the assumptions of security proofs, allowing those with the technical knowledge to access supposedly secure information using “blinding attacks.”
“So-called ‘blinding attacks’ exploit vulnerabilities of single photon detectors (SPDs) to open a side-channel, via which an eavesdropper can gain full information about the (assumed-to-be) secure key,” he said.
Professor Wolfgang Tittel, the research group’s leader, remained optimistic about his team’s work, saying, “our experimental demonstration paves the way for MDI-QKD-based star-type quantum networks with kbps secret key rates spanning geographical distances of more than 100 km.”
Making quantum networks more secure may take some time, but it’s an invaluable step forward. Quantum computers, as well as AI, are expected to be our best defense in cybersecurity, and they’ll only remain so if they can’t themselves be compromised. It’s unclear when they’ll be available to consumers, but expect them to change the world in radical ways once they are.
Life remains one of the greatest and most beautiful mysteries of the universe. For one, it continues to baffle scientists that we only seem to have found life here on Earth and nowhere else in the vast expanse of space. For the time being, at least. The origin of life itself, from the simple to the complex, is a story of scientific wonder — one that is still being written today.
In a recent study published in the journal Royal Society Open Science, researchers from the Evolutionary Studies Institute at the University of the Witwatersrand (Wits University), Johannesburg in South Africa discussed a discovery regarding how complex life evolved on Earth. “Life was a chance event, there is no doubt about that,” researcher Pierre Durand from the Evolution of Complexity Laboratory said in a press release.
The chance event Durand was referring to was the linking of small strands of molecules into larger molecules capable of self-replication. Through a chemical reaction called ligation, simple RNA molecules join with other RNA molecules thanks to an enzyme they possess. Supposedly, RNAs randomly connected with each other and replicated, thereby jump-starting the process of life. “Molecular trade-offs in RNA ligases affected the modular emergence of complex ribozymes at the origin of life,” Durand explained.
Life’s Simple Beginnings
In their research, Durand and his colleagues successfully demonstrated how it’s possible for small, non-living molecules to become larger molecules capable of reproducing themselves. This is a crucial step in a series of many that made it possible for life to evolve over a long period of time. “Something needed to happen for these small molecules to interact and form longer, more complex molecules and that happened completely by chance,” Durand added.
Even more surprising, the smallest of these simple molecules (a 40-nucleotide RNA) was smaller than the researchers expected. Nucleotides are the building blocks of nucleic acids, which in turn make up RNA and DNA. “The small molecules are very promiscuous and can join other pieces to themselves,” Durand explained. “What was interesting was that these smaller molecules were smaller than we had originally thought.”
As our understanding of how complex life came to be continues to evolve, we learn more about what makes life possible. Furthermore, now that we know how complex life came to be on Earth, perhaps we’ll be better equipped to find life elsewhere. Whether complex life started in the oceans — as is widely accepted — or on land, what’s clear is that it started at a particular moment in Earth’s history.
A team of researchers has conducted a study of the human microbiome — the trillions of microbial organisms found on and within the human body. They studied microbial communities originating from the gut, skin, mouth, and other parts of the body, revealing new information detailing how these microbes benefit the body and its health.
Researchers from the University of Maryland School of Medicine (UM SOM), Harvard T.H. Chan School of Public Health, the Broad Institute of MIT and Harvard, and the University of California San Diego collaborated on the study, making it the largest study on human microbial communities to ever take place — it’s the continuation of work started by the National Institutes of Health Human Microbiome Project, which initially launched in 2007.
According to Curtis Huttenhower, associate professor of computational biology and bioinformatics at Harvard Chan School, and senior author on the project, the findings provide “the most detailed information to date about exactly which microbes and molecular processes help to maintain health in the human microbiome.”
The team took 1,631 new microbial samples from 265 individuals of various ages, then used DNA sequence analysis tools to identify which organisms were present at different body sites and whether their functions or stability changed over time.
At its conclusion, the team was able to identify microbes belonging to specific strains, observe the biochemical activity that allows microbes to maintain the human body’s health, and compile one of the largest profiles of viruses and fungi residing within microbiomes.
Despite the team’s progress, there are still many things about microbial communities that remain unclear — their complexity, while fascinating, requires more time and research.
“In one sense, this study is a great advancement for the research community,” said Anup Mahurkar, executive director of software engineering & information technology at University of Maryland’s Institute for Genome Sciences. “On the other hand, it still just moves the needle. There will always be more we can learn.”
Our bodies are home to two types of fat, one “good” and the other “bad.” The “good,” brown fat found near our necks and shoulders aids in the process of burning calories, while the “bad,” white fat in areas like our hips, bellies, and thighs simply stores calories.
The researchers found that by genetically engineering mice that didn’t produce a protein called PexRAP, they could turn white fat into beige fat, an intermediary between white and brown fat that functions more like the latter.
The mice without PexRAP in their white fat cells were found to have more beige fat. What’s more, they were leaner than the control animals, even when they were eating the same amount of food.
The researchers hope that the results of their tests on mice could yield better treatments for obesity and diabetes caused by weight gain in humans.
“Our goal is to find a way to treat or prevent obesity,” the paper’s first author, Irfan J. Lodhi, PhD, noted in a press release. “Our research suggests that by targeting a protein in white fat, we can convert bad fat into a type of fat that fights obesity.”
The next challenge is to figure out how this technique can be modified for human usage. More than two-thirds of adults in the U.S. are either overweight or obese, and some 30 million people suffer from diabetes. Blocking the production of the PexRAP protein could potentially make it easier for those people to lose weight.
However, researchers and drug manufacturers will have a few important hurdles to overcome before they can bring a product to market. “The challenge will be finding safe ways to [block the PexRAP protein] without causing a person to overheat or develop a fever, but drug developers now have a good target,” said Lodhi.
Eight months have passed since the fifth Hawaii Space Exploration Analog and Simulation (HI-SEAS) experiment began, and on Sunday, September 17, the crew returned from isolation.
The crew, composed of four men and two women, was tasked with living in a geodesic dome located near Mauna Loa, one of the remote volcanoes that form Hawaii. The dome had 1,200 square feet of floor space and small sleeping quarters for each member. A kitchen, laboratory, bathroom with one shower and two toilets, simulated airlock, and “dirty work area” were also included.
The mission was meant to simulate Mars, hence the habitat’s rocky, Mars-ish location, and as such, the crew had to wear space suits while outside of the dome, communicate with NASA on a 20-minute delay, and live on canned and freeze-dried food.
While in isolation, the crew played games designed to measure their stress levels and overall compatibility. Logs were kept by each member explaining how they felt throughout the mission. Special sensors were also worn at all times to measure their voice levels and distance to other crew members — this was particularly important to see if members avoided each other, and if they got into arguments.
“There are certainly human factors to be figured out, that’s part of what HI-SEAS is for,” said the team’s information technology specialist Laura Lark in a video recorded within the dome. “But I think that overcoming those challenges is just a matter of effort. We are absolutely capable of it.”
The data collected will be used to help NASA (which funded the project) pick individuals for a two-to-three-year mission to the red planet. The organization hopes to begin around 2030, and those chosen for the expedition will need to have the right combination of characteristics and skills to better deal with the stress, relative loneliness, and dangers such a mission would entail.
Engineers have modelled the interactions between subatomic components of a complex molecule using a quantum computer, making a significant leap forward in our modelling of chemical reactions.
The simulations were carried out by IBM on superconducting hardware, and this milestone pushes into new territory what can be achieved using quantum computing.
The molecule in question was beryllium hydride – or BeH2. It’s not the fanciest molecule in town, but there’s still a lot going on between those two hydrogens and single beryllium for a computer to figure out.
Last year, Google engineers simulated the bonding of a pair of hydrogen atoms on its own quantum computer, demonstrating a proof of principle in the complex modelling of the simplest arrangement of energies in molecules.
Molecular simulations aren’t revolutionary on their own – classical computers are capable of some pretty detailed models that can involve far more than three atoms.
But even our biggest supercomputers can quickly struggle with the exponential nature of keeping track of quantum interactions of each new electron involved in a molecule’s bonds, something which is a walk in the park for a quantum computer.
Achieving Quantum Supremacy
These revolutionary devices have been big news of late, with big players in the information technology world investing heavily in the race for quantum supremacy – the line in the sand where quantum computers become truly practical tools that surpass the power of traditional computing systems.
For a quick 101: quantum computers are devices that use the quantum states of particles to represent information in calculations, much like the 1s and 0s of binary code.
Crucially, these states can exist in a blurred in-between combination called a superposition, a property that can be exploited in calculations that would take a classical computer a very long time to run through.
This makes quantum computers a big deal for some things, such as finding supersized prime numbers or – as in this case – crunching the numbers on particle interactions within a molecule.
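The amplitude bookkeeping behind superposition can be sketched in a few lines. This is a toy single-qubit state-vector model, not how real quantum hardware is programmed:

```python
import math

# A qubit's state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; a measurement yields 0 or 1 with those
# probabilities.
def hadamard(state):
    """Apply a Hadamard gate, which turns |0> into an equal superposition."""
    a, b = state
    s = 1.0 / math.sqrt(2.0)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1."""
    a, b = state
    return abs(a) ** 2, abs(b) ** 2

zero = (1.0 + 0.0j, 0.0 + 0.0j)  # the classical-looking |0> state
superposed = hadamard(zero)       # the "blurred in-between" state

print(probabilities(zero))        # always measures 0
print(probabilities(superposed))  # 0 or 1 with equal odds
```

Each extra qubit doubles the number of amplitudes to track, which is exactly why classical simulation of many-particle quantum systems blows up exponentially.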
Unlike those solar-system style diagrams your high school chemistry teacher drew on the board, electrons don’t behave like little spheres whizzing around a nucleus.
Instead they exist in a mind-bending state of possibilities that only get more complicated as you add more particles into their surroundings.
This constitutes what’s called a many-body problem in physics, and even just a few particles in one or two dimensions demands some hardcore problem solving.
Usually, physicists find shortcuts. One such simplification is the Monte Carlo method, which applies a statistical sampling process to solve rule-based problems.
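The Monte Carlo idea can be shown with the classic toy example of estimating π by random sampling; this is purely illustrative and unrelated to the IBM experiment itself.

```python
import random

def estimate_pi(n_samples, rng):
    """Monte Carlo sampling: the fraction of random points in the unit square
    that land inside the quarter circle approximates pi / 4."""
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

rng = random.Random(0)
print(estimate_pi(200_000, rng))  # close to 3.14159...
```

The same sample-and-average trick underlies quantum Monte Carlo methods for many-body systems, and it works well until the quantity being sampled fluctuates too wildly for averages to converge.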
When it comes to increasing numbers of charged particles, these kinds of short cuts can quickly fall apart.
Having a working quantum computer can potentially provide a neat way to avoid these problems.
The thing is, even the latest quantum computers are small and prone to making mistakes.
As cutting edge as it is, the seven qubit device used in this study still relied on delicate states that could only be used in calculations for microseconds, leaving little time for lengthy processes.
The goal was to come up with an efficient algorithm that would describe the arrangement of particles in molecules with three atoms, including lithium hydride and beryllium hydride.
“Our scheme contrasts from previously studied quantum simulation algorithms, which focus on adapting classical molecular simulation schemes to quantum hardware – and in so doing not effectively taking into account the limited overheads of current realistic quantum devices,” IBM researchers explain on their blog.
Ultimately this means we’ll be better prepared for the next generation of quantum computers that tackle bigger molecules.
It’s hoped that one day, we’ll have such detailed solutions to various many-body problems that we’ll be able to predict the interactions of compounds far more accurately, pointing the way to improved drugs or spotting obscure side effects before clinical trials even begin.
Eventually, the sky will be the limit for quantum computers. But even the biggest and best devices will be giant paperweights without the right software to drive them.
Two types of cells can be found in the human body. Germ cells make sperm and eggs, while somatic cells make up everything else. The two are incredibly different — no surprise given their dramatically different roles in human biology.
Previously, scientists believed that the MIWI2 gene, one of a group of genes that help sperm develop properly, was expressed only in male germ cells. However, the Boston University School of Medicine (BUSM) researchers discovered that not only is MIWI2 expressed in somatic cells in the body, its expression distinguishes a unique family of multi-ciliated cells that line the lungs’ upper airways and help the body fight infection.
“These ciliated cells have hair-like projections that function to sweep mucous and other foreign material out of the lung. However, what sets this new population of ciliated cells apart is that they express the MIWI2 protein and, in this report, were found to have a specialized role in controlling lung infection,” BUSM assistant professor of medicine and corresponding author Matthew Jones explained in a press release.
A Deadly Infection
According to the Centers for Disease Control and Prevention (CDC), 50,000 people die from pneumonia every year in the U.S., and worldwide, it is the leading infectious cause of death amongst children under the age of five. The researchers are optimistic that their findings may prompt new, non-traditional approaches to fighting this potentially deadly infection.
“Pneumonia is a worldwide public health burden and a leading cause of death from infection. Together with the increasing prevalence of antibiotic resistant strains of bacteria, it is now more critical than ever to develop new methods for combating this pathogen,” Jones stated in the release. “It is our hope that we can leverage these molecular insights to develop novel therapeutic strategies.”
The researchers believe the new pattern of gene expression and corresponding cell type may also lead to an improved understanding of the mechanisms behind inflammatory diseases that involve changes in the cellular composition of the airway, such as asthma and chronic obstructive pulmonary disease (COPD). Next steps will include pursuing new investigations of how the lungs react to infectious bacteria.
Researchers have discovered that autophagy genes in worms, which are crucial to the cellular survival process in all species, drive the aging process later in life after promoting health and fitness in young specimens. The research revealed that neuronal and whole body health improved, and longevity with it, when autophagy was shut down in old worms.
This provides proof that the aging process evolved, and offers hope for treating neurodegenerative disorders related to autophagy, like Alzheimer’s, Huntington’s, and Parkinson’s.
Although getting old has traditionally been considered an inevitability, in recent years more people have been questioning whether it should be. Since aging would seem to prevent species from multiplying as effectively, aging itself might appear to be an evolutionary contradiction.
However, starting in 1953, aging began to be understood as a predictable result of evolution when the antagonistic pleiotropy (AP) hypothesis was proposed. This theory states that a gene mutation that results in more offspring will be passed on as an advantage, even if it shortens life; the mutation means the carrier will have more descendants to pass on its genes, making up for its shorter lifespan.
Over time, the aging process becomes hard-wired into our DNA, as pro-aging mutations are actively selected for in the evolutionary process.
This theory has finally been backed up by evidence. The study was able to examine AP genes in older animals on a large scale, which allowed the researchers to identify 30 genes that promoted aging — a surprisingly large number, and some of the first discovered to cause the process in old test subjects. Moreover, the research tested only 0.05 percent of all of the genes in the subject worms, indicating that many more such genes may be yet to be identified.
Perhaps even more interesting than the connection between aging and evolution are the other processes the aging genes control. Autophagy — which, earlier in life, eliminates unneeded or dysfunctional cell components — deteriorates over time, and the researchers found that shutting the process down altogether allowed the worms to live longer than letting it limp along. In other words, the conventional wisdom that autophagy is beneficial even when barely working appears to be wrong.
The researchers tracked pro-longevity signals to the neurons and inactivated the autophagy process in the old worms. By doing so, they improved the overall health of the worms, as well as prolonged their lives.
“Imagine reaching the halfway point in your life and getting a drug that leaves you as fit and mobile as someone half your age, who you then live longer than. That’s what it’s like for the worms,” co-lead author Thomas Wilhelm, of the Institute of Molecular Biology in Mainz, Germany, told Medical Xpress. “We turn autophagy off only in one tissue and the whole animal gets a boost. The neurons are much healthier in the treated worms, and we think this is what keeps the muscles and the rest of the body in good shape. The net result is a 50% extension of life.”
Although the researchers do not yet know the precise mechanism that allows the neurons to remain healthier for longer, these findings could represent a way to preserve neuronal integrity in diseases associated with autophagy, such as Alzheimer’s, Huntington’s, and Parkinson’s disease. Any such treatment is far in the future and depends on the findings being validated in humans, but it offers the possibility of preventing age-related disease and improving health.
In early 2018, NASA will send its Cold Atom Laboratory (CAL), worth $70 million, to the International Space Station (ISS). The fully automated rig will create Bose-Einstein condensates (BECs) and conduct other cold atom experiments in orbit, using the weightless conditions to achieve record-low temperatures and make new discoveries in quantum mechanics and gravity. The orbiting lab will be a collaborative space for atomic physicists all over the world, who will work with instruments that are now miniaturized, making them easily portable.
The primary reason to do atomic physics in space is to achieve a no-gravity environment. Physicists use lasers and magnets to trap and chill atoms, and lower their speeds to centimeters per second to make a BEC. However, to probe a BEC, they illuminate it with a laser, examining its shadow to parse out the atoms’ distribution — a process that requires they release the BEC. On Earth, it only takes 10 to 20 milliseconds for a released BEC to crash inside a vacuum chamber, but a BEC in space hovers inside the chamber for about 10 seconds before it warms up. This vital difference gives scientists plenty of time to take measurements and make observations.
The final step of making a BEC involves trapping the atoms in a magnetic field and allowing the gas to expand and cool. In space, gases can achieve extremely low temperatures without losing atoms. And while weightless conditions have been attempted on Earth, in space researchers will be able to do considerably more.
Probing Quantum Mysteries
CAL physicists will of course be trying to reach the lowest possible temperatures, which could produce new quantum effects. According to University of Virginia physicist and CAL experimenter Charles Sackett, researchers are confident they can achieve temperatures of 100 picokelvins and possibly lower. Other CAL researchers hope to create hollow BEC shells, which are impossible to make under gravity.
Still other CAL team researchers plan to probe the Efimov effect, a quantum effect that enables some atoms that won’t bind in pairs to form weakly-bound, three-atom molecules. These three-atom molecules are like Borromean rings at the atomic level, existing in intertwined groups; remove a single ring, and the others fall apart. And while the effect has been seen on Earth, a much stronger Efimov effect as well as the second Efimov state should be observable in space.
Atom interferometry is one of the CAL team’s biggest goals. A laser beam splits the quantum wave of a BEC in two, and quantum effects let the wave travel both paths at the same time. The ways the paths split and recombine can tell scientists how much gravity causes atoms to accelerate as they orbit Earth. An atom interferometer in orbit could have numerous scientific applications: in spacecraft, as more accurate navigation systems, and to test both the “equivalence principle” and general relativity.
The CAL team will join forces with the German QUANTUS team, from the Center of Applied Space Technology and Microgravity, on an atom interferometry project, called BECAL, which will launch in 2020 or 2021. Eventually, physicists focusing on cold atoms hope to launch a dedicated satellite mission to achieve higher sensitivity and precision.
For the first time, digital information in the form of light waves has been converted into sound waves and stored on a microchip. The conversion process slows the information down, making it easier to manipulate within photonic circuits, which are being developed for microchips that control data using light instead of electrons.
This world-first was accomplished by a team of researchers from the University of Sydney, led by doctoral candidate Moritz Merklein and Dr. Birgit Stiller, with their research published in the journal Nature Communications. Merklein and Stiller are from the ARC Centre of Excellence for Ultrahigh bandwidth Devices for Optical Systems (CUDOS) and the team used the Australian National University’s Laser Physics Centre to design the microchip.
Light is extremely useful when it comes to moving information across a great distance, but its unbeatable speed is also a detriment, as it makes it difficult for computer and telecommunication systems to process the stored information. Speed isn’t useful if the information can’t be processed.
This is why sound waves, or rather the conversion from light to sound waves, work so well here. This process slows the information down long enough for it to be processed, before it’s converted back into light waves and sent on its way.
“The information in our chip in acoustic form travels at a velocity five orders of magnitude slower than in the optical domain,” said Stiller. “It is like the difference between thunder and lightning.”
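Stiller’s comparison is easy to sanity-check with rough numbers. The short Python sketch below uses assumed textbook values (light at roughly 2e8 m/s in a waveguide, sound at roughly 2.5e3 m/s in glass; neither figure comes from the study itself) to show why the acoustic domain buys about five orders of magnitude of extra processing time on a one-centimeter chip.

```python
# Back-of-envelope check of the "five orders of magnitude" claim.
# Assumed values (not from the study): optical group velocity in a
# waveguide ~2e8 m/s; acoustic velocity in the chip material ~2.5e3 m/s.
LIGHT_SPEED_CHIP = 2e8    # m/s (assumed)
SOUND_SPEED_CHIP = 2.5e3  # m/s (assumed)
CHIP_LENGTH = 0.01        # 1 cm optical/acoustic path (assumed)

t_optical = CHIP_LENGTH / LIGHT_SPEED_CHIP   # tens of picoseconds
t_acoustic = CHIP_LENGTH / SOUND_SPEED_CHIP  # a few microseconds
slowdown = t_acoustic / t_optical

print(f"optical transit:  {t_optical:.1e} s")
print(f"acoustic transit: {t_acoustic:.1e} s")
print(f"slow-down factor: {slowdown:.0f}x")
```

With these illustrative numbers the acoustic pulse lingers tens of thousands of times longer than the optical one, which is exactly the window the chip exploits for processing.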
Traditional electronic devices used in telecommunications and optical fiber networks are vulnerable to interference, produce excess heat, and consume too much energy. Implementing light and sound waves on a photonic microchip eliminates these problems — photons are immune to electromagnetic interference, and there is no electronic resistance to produce heat. Additionally, with this process, overall bandwidth is increased and data can travel at the speed of light.
“Our system is not limited to a narrow bandwidth,” added Stiller. “So unlike previous systems this allows us to store and retrieve information at multiple wavelengths simultaneously, vastly increasing the efficiency of the device.”
CUDOS director and professor Benjamin Eggleton, who co-authored the research, said that this work marks an important step forward in optical information processing, saying, “This concept fulfills all requirements for current and future generation optical communication systems.”
Computer systems are only going to continue getting bigger and faster, but the amount of heat that advanced devices create makes them difficult to use and maintain. The successful development of photonic chips, however, has prompted companies like IBM and Intel to explore the possibility of using such chips in their systems. Hopefully, as computers get more powerful, other companies like Microsoft will eventually do the same if only to avoid having inferior hardware.
A team of researchers at Wits University in Johannesburg, South Africa, has made a major breakthrough in the field of biomedical engineering. According to a release published on Medical Xpress, for the first time ever, researchers have devised a way of connecting the human brain to the internet in real time. It’s been dubbed the “Brainternet” project, and it essentially turns the brain “…into an Internet of Things (IoT) node on the World Wide Web.”
The project works by taking EEG (electroencephalogram) signals gathered by an Emotiv headset worn by the user. The signals are then transmitted to a low-cost Raspberry Pi computer, which live streams the data to an application programming interface and displays it on an open website where anyone can view the activity. Adam Pantanowitz, a lecturer in the Wits School of Electrical and Information Engineering and the project’s supervisor, said:
Brainternet is a new frontier in brain-computer interface systems. There is a lack of easily understood data about how a human brain works and processes information. Brainternet seeks to simplify a person’s understanding of their own brain and the brains of others. It does this through continuous monitoring of brain activity as well as enabling some interactivity.
Pantanowitz said this is just the beginning of the possibilities of the project. He adds that the team is now aiming to allow for a more interactive experience between the user and their brain. Some of this functionality has already been built into the site, but it is very narrow — limited to stimuli such as arm movement. “Brainternet can be further improved to classify recordings through a smart phone app that will provide data for a machine-learning algorithm. In future, there could be information transferred in both directions – inputs and outputs to the brain,” Pantanowitz said.
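The pipeline described above (headset to Raspberry Pi to web API) can be sketched in a few lines of Python. Everything here is hypothetical: `read_eeg_sample` stands in for the Emotiv SDK, the channel names are a small subset of Emotiv’s electrode labels, and the endpoint URL is a placeholder rather than the project’s real address.

```python
import json
import random
import time

API_ENDPOINT = "https://example.org/brainternet/api/eeg"  # placeholder URL

def read_eeg_sample():
    """Stand-in for an Emotiv headset read: one value per channel, in microvolts."""
    channels = ["AF3", "AF4", "T7", "T8"]  # subset of Emotiv electrode names
    return {ch: random.gauss(0.0, 10.0) for ch in channels}

def make_payload(sample, timestamp):
    """Package one sample as JSON, the way a simple streaming API might expect."""
    return json.dumps({"t": timestamp, "eeg": sample})

# One iteration of the streaming loop: read, package, (then POST in reality).
payload = make_payload(read_eeg_sample(), time.time())
print(payload)
```

In the real project the payload would be sent continuously to the web service (for example with `urllib.request`); the sketch stops at building one well-formed sample.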
Future applications for this project could lead to some very exciting breakthroughs in machine learning and brain-computer interfaces like Elon Musk’s Neural Lace and Bryan Johnson’s Kernel. Data collected from this project could lead to a better understanding of how our minds work and how we can take advantage of that knowledge to augment our brain power.
Researchers at the University of Washington have developed a long-range backscatter system that can transmit data up to 2.8 kilometers (1.7 miles) using a very small amount of power — apparently up to 1,000 times less power than existing technologies that perform a similar function.
Reflected or “backscattered” radio signals can be transmitted without using a great deal of power. However, this technique makes it harder for the receiver to differentiate the reflections from the original signal and other, unrelated noise.
The team addressed this limitation using a form of modulation called chirp spread spectrum. With the reflections spread across multiple frequencies, it’s possible to receive the backscattered signals at long distances, even despite interference from other sources.
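For readers unfamiliar with chirp spread spectrum, the essential ingredient is a symbol whose frequency sweeps linearly across the band. The minimal Python sketch below generates such a chirp; the sample rate and frequency range are illustrative values, not the parameters of the UW system.

```python
import math

def linear_chirp(f0, f1, duration, sample_rate):
    """Generate one linear up-chirp symbol sweeping from f0 to f1 (Hz)."""
    n = int(duration * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        # Instantaneous phase of a linear sweep:
        # 2*pi * (f0*t + (f1 - f0) * t^2 / (2 * duration))
        phase = 2 * math.pi * (f0 * t + (f1 - f0) * t * t / (2 * duration))
        samples.append(math.sin(phase))
    return samples

# Illustrative parameters: sweep 1 kHz -> 5 kHz over 10 ms at 100 kHz sampling.
sig = linear_chirp(f0=1000.0, f1=5000.0, duration=0.01, sample_rate=100_000)
print(len(sig))
```

Roughly speaking, a receiver correlates against the same known sweep, and it is that correlation gain, spread across many frequencies, that lets faint backscattered copies be pulled out from interference at long range.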
A Little Power Goes a Long Way
This technology could offer major advantages for devices that need to communicate information wirelessly but lack the space for a bulky battery or another power source. One application outlined by the system’s creators is flexible electronics.
The nature of a flexible device — like wearable patches that monitor body sweat or the range of motion on a joint — means that there’s no space for a battery pack. This system would allow them to beam their readings back while operating using an ultra-thin power source.
The researchers created an epidermal patch and a prototype contact lens that were outfitted with the technology, and managed to transmit data across a 3,300 square-foot room. The team has started a company called Jeeva Wireless to manufacture sensors utilizing the system, and expect to produce them at a bulk cost of just 10 to 20 cents per piece.
Humanity has so much to gain from a map of the human brain’s neural connections, but while producing that map sounds simple enough in theory, it’s extremely difficult in practice. Our brains have some 100 billion neurons making 100 trillion connections, and mapping this massive amount of information requires that we pinpoint the exact location of every single neuron before we can even hope to understand how they communicate with one another.
Researcher Doga Gursoy is an assistant computational scientist at Argonne’s Advanced Photon Source (APS), while the other researcher, Bobby Kasthuri, is an Argonne neuroscientist. The goal of their joint project is to fully map the human brain, but first, they’re starting relatively small by mapping the 75 million neurons in the brain of an adult shrewmouse via a combination of data mining and machine learning.
Enabling this process is Argonne’s new 9.65 petaflops Intel-Cray supercomputer, Theta. Housed in the Argonne Leadership Computing Facility (ALCF), Theta is equipped with an advanced and flexible software platform designed to work with big data problems.
“The basic goal is simple — we would like to be able to image all of the neurons in the brain — but the datasets from X-rays and electron microscopes are extremely large. They are at the tera- and petabyte scales,” Gursoy said in an ALCF press release. “So we would like to use Theta to build the software and codebase infrastructure in order to analyze that data.”
To map the deepest levels of neural connectivity and activity in the tiny mouse brain, Gursoy and Kasthuri first need to produce the data that Theta will later analyze. This will be accomplished using Argonne’s electron microscopes and a high-resolution 3D imaging technique called X-ray microtomography.
Once the high-resolution images are taken, they can be fed to Theta. The analysis produced by Theta’s supercomputing powers will provide an image of the mouse brain in a depth never before explored.
“Machine learning will go through these datasets and help come up with predictive models. For this project, it can help with segmentation or reconstruction of the brain and help classify or identify features of interest,” explained Venkat Vishwanath, ALCF Data Sciences Group lead.
The Big Picture
After they complete their analysis of the adult shrewmouse’s brain, Gursoy and Kasthuri plan to move on to the brain of a larger mouse, and then perhaps a non-human primate. While the technique won’t be ready for the complexity of a human brain for some time, once it is, the implications could be tremendous.
A map of every connection in the brain could help us identify the small changes that can signal the development of neurological disorders, such as Alzheimer’s or autism. Such knowledge of the brain’s wiring could also lead to the development of better treatments for such disorders.
Furthermore, this level of brain mapping could assist us in more futuristic endeavors, such as the engineering of neurons themselves through synthetic biology or the creation of advanced brain-computer interfaces. It could also lead to the development of better artificial neural networks, which are designed to mimic how the human brain works.
Ultimately, as Vishwanath points out, the ALCF team’s research could inspire other projects that take advantage of supercomputing for seemingly impossible endeavors:
“By developing and demonstrating rapid analysis techniques, such as data mining, graph analytics, and machine learning, together with workflows that will facilitate productive usage on our systems for applications, we will pave the way for more and more science communities to use supercomputers for their big data challenges in the future.”
Chemists are already able to engineer molecules designed for the purpose of extracting other types of molecules, such as radioactive elements and chemical pollutants, from solutions. These “receptor molecules” consist of atoms arranged in a specific shape that leaves room for the target to attach itself and break away from its host.
However, these molecules are designed in laboratory conditions, so accurately predicting whether they will work as intended in “real world” solutions is difficult. The space intended for the target can be distorted or filled by other molecules within the solution, rendering it unusable for its intended purpose.
The IU researchers theorized that the surrounding solution’s dielectric constant — a measurement of its ability to stabilize electrical charge — determined the level of attraction between receptor molecules and negatively charged ions. When they tested this theory using the molecule triazolophane to extract chloride from solutions, they found they were able to accurately predict the engineered molecule’s effectiveness.
As primary researcher Yun Liu noted in a press release, “The current paradigm only works for molecular designs on the drawing board, in theory, but we want to make molecules that will work in practice to help solve problems in the real world.”
Truly, receptor molecules have almost innumerable real-world applications. They can be used to remove chloride from salt water as part of the desalination process, chemical pollutants from soil, or lithium ions from water for use in renewable energy systems.
Safety is always a concern when it comes to nuclear power, and while safe methods for disposing of nuclear waste have already been established, the process can be complicated and costly. Using this new principle, chemists will be able to make more informed decisions when they design receptor molecules to remove radioactive elements from nuclear waste.
These improved designs could significantly reduce the volume of those radioactive materials and lead to improved storage methods for them, thus making nuclear energy an even more attractive alternative to environment-damaging fossil fuels.
More than 60 years ago, researchers first identified the part of the brain that controls the feeling of thirst. Now, a team of scientists has identified the exact neurons responsible for telling our bodies when we need to reach for a beverage.
Led by Stanford neuroscience researcher William E. Allen, the team made this discovery by first depriving mice of water for 48 hours. Then, they analyzed the RNA expression in the animals’ median preoptic nucleus (MnPO), the area of the brain previously linked to thirst. Through this analysis, the researchers were able to narrow down their area of interest to specific clusters of neurons within a particular region of the brain.
Next, the researchers focused on how manipulating these neurons would affect water consumption in the mice. When they inhibited the activation of the neurons via optogenetics (the use of light to control genetically modified cells) they noticed that the mice decreased their water consumption. Conversely, when the neurons were activated, water consumption increased.
Some of the mice in the study were trained to access water by pressing a lever. The researchers noticed that when the neural activity in these mice decreased — a sign they weren’t thirsty — so did the rate at which they pressed the lever.
Meanwhile, when mice were given the opportunity to stop the photoactivation of their MnPO neurons by pressing the lever, they did so “vigorously” — expressing a strong desire to free themselves from their feeling of unquenchable thirst.
The Next Level of Humanity
Thirst is one of our most basic drives. If we don’t ingest the appropriate amount of water, our bodies can become dehydrated, causing our biological systems to shut down. Ultimately, we can die of thirst. Conversely, too much water can be problematic as well, causing the cells in our bodies to swell and potentially leading to seizures, coma, and death.
This new research from Stanford could lead to treatments that help people suffering from conditions such as adipsia (a lack of thirst) and polydipsia (an unquenchable thirst) make up for their bodies’ inability to naturally maintain appropriate hydration levels.
Even more broadly, this research tells us something about the brain that we didn’t previously know. Despite the vast amount of research in the field of neuroscience, the brain is still largely a mystery, and each new discovery puts us one step closer to unlocking its secrets.
Not only could a more complete understanding of the brain lead us to better treatments for psychological and neurological disorders, it could also help us reach the next level in human evolution — a time when we navigate virtual reality (VR) through thought alone, control bionic devices with our minds, and access the digital world via brain implants.
Crops and vegetables were among the first to be used in testing how effective CRISPR-Cas9 can be. Now, researchers from Cold Spring Harbor Laboratory (CSHL) in New York have used today’s most efficient gene editing tool to push the envelope of agricultural crop yield. In their study, published in the journal Cell, the CSHL scientists developed a method to edit the genome of tomatoes using CRISPR.
Specifically, the researchers edited major trait variations known to affect yield rates in crops. These traits include the size of the fruit, its branching architecture, and the overall shape of the plant. The team used CRISPR-Cas9 to make multiple cuts inside three genome sequences in tomatoes called promoters, DNA regions near the genes that regulate when, where, and at what level the actual “yield” genes become active. By introducing multiple sets of mutations into these promoter sequences, the CSHL scientists were able to induce a wide range of changes in the three targeted traits.
“What we demonstrated with each of the traits was the ability to use CRISPR to generate new genetic and trait variation that breeders can use to tailor a plant to suit conditions,” lead researcher and CSHL professor Zachary Lippman said in a press release. “Each trait can now be controlled in the way a dimmer switch controls a light bulb.”
Editing Our Way Through Hunger
Some might ask why target regulatory sequences instead of the actual “yield” genes. Well, the researchers found that this method yielded significantly better results. They were able to achieve a subtler impact on the quantitative traits. “Traditional breeding involves great time and effort to adapt beneficial variants of relevant genes to the best varieties, which must continuously be improved every year,” Lippman explained.
“Our approach can help bypass this constraint by directly generating and selecting for the most desirable variants controlling gene activity in the context of other natural mutations that benefit breeding,” he added. “We can now work with the native DNA and enhance what nature has provided, which we believe can help break yield barriers.”
This isn’t the first time CRISPR has been used to improve crop yield. Indeed, the agricultural applications of CRISPR are not only obvious but, according to CRISPR-Cas9 developer Jennifer Doudna, greatly needed. Crops genetically modified using CRISPR could hypothetically help solve world hunger. This new CSHL method, which can be used in all food, feed, and fuel crops — staples such as rice, maize, sorghum, and wheat — can definitely contribute.
“Current rates of crop yield increases won’t meet the planet’s future agricultural demands as the human population grows,” said Lippman. “One of the most severe limitations is that nature hasn’t provided enough genetic variation for breeders to work with, especially for the major yield traits that can involve dozens of genes. Our lab has now used CRISPR technology to generate novel genetic variation that can accelerate crop improvement while making its outcomes more predictable.”
Scientists are increasingly using DNA to build structures at the tiniest scales, a process sometimes called DNA origami. Because DNA can store a lot of information, structures built out of it can be used to deliver medicines throughout the body or transport gene editing tools. Now, researchers have used DNA to create a new type of robot designed to move and lift cargo at the smallest scales. Their research has been published in Science.
“Just like electromechanical robots are sent off to faraway places, like Mars, we would like to send molecular robots to minuscule places where humans can’t go, such as the bloodstream,” Lulu Qian, a bioengineering professor at the California Institute of Technology and one of the study’s authors, explained in a press release. “Our goal was to design and build a molecular robot that could perform a sophisticated nanomechanical task: cargo sorting.”
The researchers wanted to create robots that could pick up and sort molecules within a designated space. To that end, they created a three-part DNA robot out of a single strand of DNA. To help it maneuver within the designated space, the robot had a “leg” with a pair of feet. An “arm” with a “hand” allowed it to carry cargo, and a third component was added to tell the hand when a specific drop-off point had been reached so it would know to release the cargo.
To test their robot, the researchers created a board featuring a grid of single DNA strands to which their robot could bind with one foot. Meanwhile, the other foot would swing freely until it found another strand on the board. Moving in this manner, the robot needed five minutes to take one six-nanometer step.
During the course of its excursion, when the robot ran into cargo — in this case, fluorescent yellow and pink dye molecules — it knew to pick it up and deliver it to a designated location depending on the color. The robot was designed to continue this process until all the cargo was sorted, and after 24 hours, the tiny bot had successfully sorted three pink and three yellow molecules.
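The sorting behavior described above can be caricatured as a blind random walk: wander, pick up whatever cargo you bump into, and release it only at the end matching its color. The Python toy below implements that logic on a 1-D track; it is an illustration of random-walk sorting under assumed cargo positions and colors, not a model of the actual DNA chemistry.

```python
import random

def sort_cargo(track_len=20, cargo=None, seed=1):
    """Randomly walk a 1-D track, delivering pink cargo to the left end
    and yellow cargo to the right end. Returns (delivered, steps taken)."""
    rng = random.Random(seed)
    cargo = dict(cargo or {3: "pink", 8: "yellow", 14: "pink"})  # assumed layout
    pos, carrying, steps = track_len // 2, None, 0
    delivered = []
    while cargo or carrying:
        steps += 1
        # Take one random step, staying on the track.
        pos = max(0, min(track_len - 1, pos + rng.choice((-1, 1))))
        if carrying is None and pos in cargo:
            carrying = cargo.pop(pos)          # pick up cargo at this site
        if carrying is not None:
            goal = 0 if carrying == "pink" else track_len - 1
            if pos == goal:
                delivered.append(carrying)     # drop at the matching end
                carrying = None
    return delivered, steps

delivered, steps = sort_cargo()
print(delivered, steps)
```

Even on this tiny track the walker needs hundreds or thousands of steps to finish, which gives some intuition for why the real robot, taking one six-nanometer step every five minutes, needed about 24 hours to sort six molecules.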
After several different iterations, the researchers created robots that had an 80 percent chance of accurately delivering their cargo.
Tiny Robots, Big Applications
According to the researchers, their nanobots can be configured in many different ways. Several bots can work in the same area simultaneously without interfering with the others’ tasks or they could be designed to interact with one another. In order for the robots to be useful, though, they need to work a bit faster. The researchers have suggested that adding “tails” or motors to the robots could speed them up.
The major advance is in the researchers’ methodology, according to John H. Reif, a computer science professor at Duke University. “Similar systems should be able to perform more complex tasks, such as executing chemical synthesis in parallel,” he asserted in an accompanying editorial for Science.
Though Qian noted in the press release that she and her team didn’t create the robots with any specific purpose in mind, she thinks other researchers will find applications for them.
“[I]t is my hope that other researchers could use these principles for exciting applications, such as using a DNA robot for synthesizing a therapeutic chemical from its constituent parts in an artificial molecular factory, delivering a drug only when a specific signal is given in bloodstreams or cells, or sorting molecular components in trash for recycling,” said Qian.
This could be particularly useful during our exploration of the cosmos, as astronauts on long-duration spaceflights could have these bots in their bloodstreams, remaining inactive until they receive the signal telling them to release a needed medication.
A team of researchers from the Australian National University (ANU) has made a significant achievement that could bring quantum computing and the much anticipated quantum internet closer to reality. In a study published in the journal Nature Physics, the team, led by ANU Research School of Physics associate professor Matthew Sellars, found a way to extend the data storage time of quantum systems using crystals treated with the rare-earth element erbium.
“We have shown that an erbium-doped crystal is the perfect material to form the building blocks of a quantum internet that will unlock the full potential of future quantum computers,” Sellars said in an ANU press release. “We had this idea 10 years ago, but many of our peers told us that such a simple idea couldn’t work. Seeing this result, it feels great to know that our approach was the right one.”
The so-called building blocks of quantum computers and a quantum internet are quantum bits (or qubits), which are entangled particles that can carry information simultaneously as both ones and zeroes. Compared to the binary bits of conventional computers, which can only be a 0 or a 1 at any given moment, qubits allow significantly more information to be processed, and faster. The challenge has been prolonging the entangled state, and thereby extending the length of time data can be stored.
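The bit-versus-qubit contrast can be made concrete with a quick count. In the sketch below, a classical 3-bit register holds exactly one of 2**3 = 8 values, while describing a 3-qubit state requires tracking 2**3 = 8 amplitudes at once. The counting argument is standard textbook material; the dictionary representation is just for illustration.

```python
import itertools
import math

n = 3
classical_register = 0b101  # a classical register: one definite value out of 2**n

# A 3-qubit state in equal superposition: one amplitude per basis state.
amp = 1 / math.sqrt(2 ** n)
qubit_state = {bits: amp for bits in itertools.product((0, 1), repeat=n)}

print(len(qubit_state))  # number of amplitudes tracked simultaneously
total_prob = sum(a * a for a in qubit_state.values())
print(round(total_prob, 10))  # squared amplitudes sum to 1
```

Note that measuring the register still yields only one of the 8 outcomes; quantum speedups come from interference among the amplitudes, not from reading them all out in parallel.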
A Global Quantum Network
Sellars and his team approached the problem from an engineering perspective. By using erbium crystals, with their unique quantum properties, the ANU team were able to successfully store quantum information for 1.3 seconds — a quantum memory 10,000 times longer-lived than previous efforts. Plus, it eliminates the need for a conversion process, since the erbium crystals operate in the same bandwidth as current fiber optic networks.
“At the moment researchers are using memories that don’t work at the right wavelength, and have to employ a complicated conversion process to and from the communications wavelength,” ANU researcher Rose Ahlefeldt explained. “This can be inefficient, and means they have to do three very difficult things instead of just one.”
While we’ve already demonstrated long-distance quantum entanglement, extending the length of time data can be stored in a quantum memory is important in perfecting quantum communication, which is crucial in the development of a quantum internet. This kind of quantum network promises faster information transfer, as well as “hack-proof” communication because it uses a type of encryption that keeps messages and information secure via a quantum key. In short, tampering with messages sent through a quantum internet is nearly impossible.
“The effort to build a quantum computer is often described as the space race of the 21st century,” Sellars said. Prolonging quantum memory is crucial in that race, and their technology can help do that. Plus, it can also be used to connect many types of quantum computers. Sellars added that it “will allow us to build a global network to connect quantum computers” — i.e., a quantum internet.
For the first time, astronomers have detected titanium oxide in the atmosphere of an exoplanet. The observation was made using the FORS2 instrument on the ESO’s Very Large Telescope (VLT), and it provides unique insight into the chemical composition of the exoplanet WASP-19b, as well as valuable information about the pressure and temperature structure of its atmosphere.
WASP-19b is a very warm and truly unusual “Hot Jupiter” about 1,000 light-years from Earth. It has about the same mass as Jupiter, but because it is so close to its parent star, it completes an orbit in just 19 hours. This also makes it very hot — the temperature of its atmosphere is estimated at about 2,000 degrees Celsius (3,632 degrees Fahrenheit).
When WASP-19b passes in front of its star, some starlight passes through its atmosphere, leaving “fingerprints” in the light that eventually reaches Earth. The team of astronomers was able to analyze these “fingerprints” with the FORS2 instrument on the VLT and deduce that along with a strongly scattering global haze, the atmosphere contained small amounts of water, sodium, and titanium oxide.
Rarely seen on Earth, titanium oxide is known to exist in the atmospheres of cool stars. In the atmosphere of a hot planet like WASP-19b, it absorbs heat. In fact, if enough titanium oxide is present, it will prevent heat from escaping or entering. This causes a thermal inversion, with the upper atmosphere warmer than the lower atmosphere. Ozone has a similar effect on Earth’s atmosphere, causing an inversion in the stratosphere.
Diversity of Alien Planets
The researchers spent more than a year collecting their data on WASP-19b, comparing their observations with atmospheric models to extrapolate the different properties of the exoplanet’s atmosphere. Now, this new data about the presence of titanium oxide, other metal oxides, and other substances can be used to develop improved models of exoplanet atmospheres.
Once astronomers are able to observe the atmospheres of planets that show a higher potential for habitability, these improved models can help them interpret those observations more effectively. These findings also confirm that the refurbished FORS2 instrument is the best tool to handle these kinds of studies from Earth.
As of July 2017, researchers have confirmed the existence of 3,500 exoplanets, with the tally of Earth-like candidates at just under 300. But the search for planets that are potentially habitable for humans is far different from the search for alien life in any form.
The very diversity of the exoplanets we have discovered simply reinforces the likelihood that alien life may be nothing like life on Earth — even if it does exist on a rocky planet not so different from ours. For example, what life might inhabit rocky exoplanets whose atmospheres are dominated by some other biosignature gas or gases instead of oxygen and nitrogen?
NASA is searching for signs of such planets right now, and as long as we remain open-minded in our observations, we’ll have the best chance of finding what we’re looking for.
Your brain’s ability to help you navigate your favorite grocery store isn’t as simple as you may think. In fact, it first involves a series of complex calculations that result in a “map” the brain can refer to in the future.
This new discovery comes from Aaron Wilber, assistant professor of psychology and neuroscience at Florida State University. He wanted to better understand how a person goes from seeing an area to creating a mental image used for navigation.
“We have not had a clear understanding of what happens when you step out of a subway tunnel, take in your surroundings, and have that moment where you instantly know where you are,” Wilber explained in a press release. “Now we’re getting closer to understanding that.”
A part of the brain known as the parietal cortex is at the center of this research. It uses the various senses to gather information, and that information is then referenced to determine which actions a person should take. These resulting actions are “recorded” and turned into a memory, which acts as a map the brain can use to get from one familiar place to another.
By recording activity in a rat’s brain as the animal performed certain actions, Wilber’s team discovered that clusters of cells — and not just individual cells — work together to form these map memories. When the same action was performed later, the same patterns of activity were observed.
“These different modules are talking to each other and seem to be changing their connections just like single cells change their connections,” Wilber explained. “But now we’re talking about large groups of cells becoming wired up in different ways as you learn and remember how to make a series of actions as you go about your day-to-day business.”
Dreams and Alzheimer’s
Wilber’s team also uncovered something interesting about dreams through the course of their research. When they recorded the activity in the rat’s brain while it slept, they discovered that the rat replayed the same actions and patterns while dreaming, only at a rate nearly four times faster than the one observed during its waking hours.
“We think these fast-forward ‘dreams’ we observe in rats could explain why in humans when you dream and wake up, you think a lot more time passed than actually has because your dreams happen at high speed or fast forward,” said Wilber. “Maybe dreams happen in fast forward because that would make it easier to create new connections in your brain as you sleep.”
More work needs to be done before we can fully understand how dreams factor into our ability to remember past actions. Thankfully, Wilber recently received funding from the National Institutes of Health, which he plans to use to investigate why the parietal cortex’s ability to function is less effective in patients with Alzheimer’s and other neurological diseases. Ultimately, Wilber’s research, as well as that of others, could lead to better treatment for the nearly 5 million people in the U.S. alone who suffer from Alzheimer’s.
A team of researchers at Northeastern University have developed a new sensor powered by the very infrared energy it’s designed to detect. The device, which was commissioned as part of DARPA’s Near Zero Power RF and Sensor Operation (N-ZERO) program, consumes zero standby power until it senses infrared (IR) wavelengths.
Energy from this source heats the sensing elements, prompting mechanical movements that close a circuit and beam out signals confirming that the IR wavelengths have been detected. A grid of nanoscale patches forms a logic circuit, with each patch being constructed with specific dimensions corresponding only to the wavelengths it’s designed to detect.
Consequently, the circuit can be altered to analyze various IR spectra. It’s possible for the sensor to simply detect IR wavelengths in its surroundings, or determine whether the energy that has activated it came from a fire, a person, or some other source.
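The device itself does this mechanically, but the matching logic the article describes — each patch firing only for its designed band — can be sketched in software. Everything below (the band values, the source names) is illustrative, not taken from the paper:

```python
# Illustrative sketch only; the real sensor implements this logic with
# nanoscale mechanical patches, and these wavelength bands are invented
# for the example (wavelengths in micrometers).
SIGNATURE_BANDS = {
    "person": [(8.0, 14.0)],            # long-wave IR, e.g. body heat
    "fire": [(3.0, 5.0), (8.0, 14.0)],  # hypothetical two-band signature
}

def classify(detected_wavelengths):
    """Return every source whose full set of bands was triggered."""
    matches = []
    for source, bands in SIGNATURE_BANDS.items():
        if all(any(lo <= w <= hi for w in detected_wavelengths)
               for lo, hi in bands):
            matches.append(source)
    return matches

print(classify([10.0]))       # only the long-wave band fires -> ['person']
print(classify([4.0, 10.0]))  # both bands fire -> ['person', 'fire']
```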
Wake Me When I’m Needed
A sensor like this is advantageous because it can be placed absolutely anywhere, without the need for a power supply or access to solar energy. Such versatility sounds like the wetware of William Gibson’s Neuromancer, so it’s no surprise it’s fallen into DARPA’s crosshairs.
The sensor is capable of detecting vehicles emitting energy via the IR spectra produced by exhaust fumes. Furthermore, since engines that burn gasoline or diesel fuels produce emissions made up of different chemical compounds, it’s able to detect whether the vehicle that it’s “seeing” is a truck, a car, or an aircraft.
“The capability of consuming power only when useful information is present will result in nearly unlimited duration of operation for unattended sensors deployed to detect infrequent but time-critical events,” states the paper, which was published this week in Nature Nanotechnology.
The decomposition of a human can be affected by many different factors, like moisture, temperature, and insects, not to mention the diverse aspects of the body itself. When forensic scientists are called in to identify a cadaver, or to help solve a murder, body farms can be a great help.
A body farm is an outdoor laboratory where researchers can delve into taphonomy, the study of what happens to an organism after its death. Donated human cadavers are used in experiments that explore the different processes that occur as the body decomposes.
There are several body farms in the U.S., and one in Australia. There is also a push to create a facility in the UK, where similar research is conducted using pig remains — which was also the case in the U.S. until 1972. But, while there are many useful similarities between human and pig biology, there are also many differences which raise questions about how accurate these studies can be.
“Much of what we know about human decomposition was discovered in US body farms,” said Dr. Anna Williams from the University of Huddersfield, who presented on the need for a body farm in the UK at the British Science Association’s annual Science Festival.
The More You Know
Decomposition generally follows a similar process no matter what conditions a body is in. However, the timing of this process can change significantly based on everything from the amount of moisture in the surroundings to the person’s diet.
One way to understand more about how these factors influence decomposition is to observe a body decay in a controlled environment.
Last year, a study of decomposing bodies outlined a method of establishing a time of death by understanding how and when certain bacteria come and go from a fresh cadaver. Current techniques tend to focus on insect colonization, but this can be unreliable as weather conditions can influence insect behavior.
Body farms can also be used to teach dogs how to track the scent of a dead body, and in experiments studying how decomposition affects facial features, fingerprints, and how far into the process DNA can be successfully recovered.
There are plenty of ways that body farms can help push the field of forensic science forward, and a greater number of facilities would support those advances. However, critics argue that there are better ways to perform this research.
It’s easy to see why members of the public wouldn’t necessarily want a body farm in their back yard. Concerns ranging from a site’s potential to lower nearby property values to its capacity to attract vultures and coyotes have been raised in regards to body farms in the U.S.
There have also been questions about how effective these farms are in terms of research. Human cadavers are obviously more relevant to investigations than pig cadavers, but animal corpses are much easier to get access to. Experiments carried out at body farms have rarely used more than three or four bodies, which has led some scientists to consider their findings anecdotal, rather than authoritative.
However, even if body farms aren’t a perfect solution, they do offer insight that we can’t get from anywhere else. In the future, there might be a way to accurately model the decomposition process so that it can be studied without the need for actual cadavers, but for now, body farms are a useful — if a little unsettling — resource.
An international team of scientists, including paleontologists from the University of Manchester, has dug up what could be the fossils of the earliest complex life forms on Earth. In a study published in the journal Nature Ecology & Evolution, the researchers said they’ve uncovered a set of trace fossils — tracks and burrows left by organisms capable of movement — in sediments located in the Corumbá region of western Brazil.
“This is an especially exciting find due to the age of the rocks – these fossils are found in rock layers which actually pre-date the oldest fossils of complex animals – at least that is what all current fossil records would suggest,” researcher Russell Garwood from Manchester’s School of Earth and Environmental Sciences said in a press release.
While the fossils aren’t physical remains or body parts, the findings are still telling. These trace fossils date back some 541 million years, to the transition between the Ediacaran and Cambrian Periods — the latter being the moment in Earth’s biological history when complex life exploded. The burrows were just 50 to 600 micrometers (microns) in diameter, suggesting that these early complex organisms were only about the width of a strand of human hair.
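For scale, compare the burrow diameters with a typical human hair — the ~70-micrometer figure below is a commonly quoted average, not a number from the study:

```python
# Rough scale comparison (hair width is an assumed typical value;
# real human hairs span roughly 17-180 micrometers).
burrow_min_um, burrow_max_um = 50, 600
hair_width_um = 70

print(round(burrow_min_um / hair_width_um, 2))  # 0.71 -- under one hair-width
print(round(burrow_max_um / hair_width_um, 2))  # 8.57 -- several hair-widths
```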
“The evolutionary events during the Ediacaran–Cambrian transition are unparalleled in Earth history. That’s because current fossil records suggests that many animal groups alive today appeared in a really short time interval,” Garwood explained.
C’est La Vie
Advances in modern technology have made it possible to study such tiny fossils. In the new study, for instance, the researchers used a process called X-ray microtomography to build 3D computer models of the trace fossils without damaging the original burrows. To ascertain the age of these creatures, the researchers used a DNA-based approach called the “molecular clock,” which traces an organism’s evolution back to a common ancestor. Lead author Luke Parry from the University of Bristol said this makes their discovery important, as it “highlights an unexplored window for tracking animal evolution in deep time.”
It would seem then that organisms capable of movement came about earlier than previously thought. Indeed, our understanding of how life came to be on Earth continues to evolve, as researchers find more evidence that suggests life could have originated on land and not just in the oceans, as well as fossils providing clues as to how complex life may have originated. Not only that, understanding the origins of complex life on Earth also better equips us to discover complex life beyond Earth.
“Our new fossils show that complex animals with muscle control were around approximately 550 million years ago, and they may have been overlooked previously because they are so tiny,” Parry continued. “The fossils that we describe were made by quite complex animals that we call bilaterians. These are all animals that are more closely related to humans, rather than to simple creatures like jellyfish. Most fossils of bilaterian animals are younger, first appearing in the Cambrian period.”
In a study published in the journal Nature Chemistry, researchers showcased a purely chemical technique for gene assembly. It uses an efficient and rapid chemical reaction called click chemistry to join multiple modified DNA fragments into a gene — a process called click DNA ligation.
In 2015, three attorneys for the Legal Aid Society of New York decided to educate themselves on a DNA analysis program, developed by the New York City medical examiner’s lab, which was being used against one of their clients in an attempted murder case. After speaking to several scientists, the attorneys were astonished to hear their lack of confidence in the method.
That program, known as the Forensic Statistical Tool (FST), is now officially under fire, alongside a method called “Low Copy Number Analysis” that has been used exclusively in New York for several years. In a letter dated September 1, the Legal Aid Society and Federal Defenders of New York asked the New York State Inspector General’s Office to investigate both methods to determine whether either led to inaccurate results during their use.
This letter was first reported by The New York Times and ProPublica.
Both of these techniques have since been phased out in favor of newer technologies; however, FST was used in 1,350 cases within the last six years, and Low Copy Number Analysis in some 3,450 cases over the past 11 years. The attorneys emphasized that inaccuracies in the techniques themselves, as well as suspected misuse of the methods, could have led to false convictions or unresolved cases.
This is a bit of a problem.
The Science Behind the Controversy
FST was developed to sort through evidence that has already gone through DNA analysis, including some created through Low Copy Number Analysis, and calculate the likelihood that a suspect’s DNA is present in samples taken from a crime scene. Because the program is relatively untested—its creators have declined to share its code with outside sources—there are concerns that it could have been turning up false allele matches, or missing them entirely.
Low Copy Number Analysis is meant to create a complete DNA profile from a tiny amount of genetic material—as little as 10 trillionths of a gram—by copying fragments found at a crime scene into a larger sample. This “amplifies” the material so that it can be tested as a whole. However, this technique carries a higher risk of contamination; critics say it could even implicate people who were never at the crime scene. Legal Aid attorney Julie Fry told the Associated Press that this technique is “like making a copy of a copy of a copy. Eventually, it’s going to be faded.”
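The “copying” at issue is PCR-style amplification, which roughly doubles the DNA present on each cycle. A minimal sketch of the idealized math (the cycle count is illustrative, and real reactions plateau long before perfect doubling would predict):

```python
# Idealized amplification: each cycle doubles the template.
# 10 trillionths of a gram is 10 picograms, i.e. 10e-12 g.
initial_grams = 10e-12
cycles = 30
amplified_grams = initial_grams * 2 ** cycles

print(f"{amplified_grams:.2e} g")  # ~1.07e-02 g under perfect doubling

# The same doubling applies to any stray contaminant molecule in the
# sample, which is the critics' "copy of a copy of a copy" worry.
```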
Yet not everyone believes the methods are unsound. In 2015, for example, a judge in Queens found Low Copy Number Analysis to be scientifically sound, and some prosecutors and forensic experts continue to praise it as a powerful tool. Chief Medical Examiner Dr. Barbara Sampson wrote in a post published Wednesday (September 6, 2017) that the two methods were phased out to satisfy new FBI requirements, not because of inaccurate results: “Each technique has been intensively reviewed and approved for use in casework by the state-established oversight agency, including a panel of distinguished scientific experts, the DNA subcommittee of the New York State Commission on Forensic Science.”
Still, in 2015, Brooklyn state Supreme Court judge Mark Dwyer ruled DNA evidence obtained through the method to be inadmissible after the Legal Aid attorneys presented what they had learned from conversations with scientists. His reasoning was that “If the experts in the DNA field cannot agree on the weight to be given to evidence produced by high sensitivity analysis, it would make no sense to throw such evidence before a lay jury.”
Now, the letter from the Legal Aid Society and Federal Defenders of New York is reigniting an old debate and raising new questions about individuals who may have been misidentified.
Although most life on Earth relies upon photosynthesis as its source of energy, the process has a weak link: chlorophyll. Plants and other organisms use the green pigment to harvest sunlight during photosynthesis, but it is rather inefficient. To that end, scientists have been searching for ways to upgrade natural photosynthesis so humans would be able to capture and use energy from the Sun more efficiently.
Now, Kelsey K. Sakimoto, a researcher at Peidong Yang’s lab at the University of California, Berkeley, has come up with a new alternative to natural photosynthesis: cyborg bacteria that were trained to cover themselves in solar panels that are much more efficient than chlorophyll at converting sunlight into useful compounds.
“Rather than rely on inefficient chlorophyll to harvest sunlight, I’ve taught bacteria how to grow and cover their bodies with tiny semiconductor nanocrystals,” Sakimoto said in a press release. “These nanocrystals are much more efficient than chlorophyll and can be grown at a fraction of the cost of manufactured solar panels.”
To produce his cyborg bacteria, Sakimoto fed the microbes the amino acid cysteine and the element cadmium. The bacteria then synthesized cadmium sulfide (CdS) nanoparticles, which efficiently absorb light and function as solar panels on the bacteria’s surfaces. The new hybrid organism, called M. thermoacetica-CdS, produces useful acetic acid from light energy, water, and CO2 at a rate that outperforms any source of natural photosynthesis.
Sakimoto’s bacteria operate at an efficiency of more than 80 percent and are both self-replicating and self-regenerating, making this a zero-waste technology with multiple uses. “Once covered with these tiny solar panels, the bacteria can synthesize food, fuels, and plastics, all using solar energy,” he explained. “These bacteria outperform natural photosynthesis.”
While he acknowledges that more research is needed, Sakimoto is hopeful that his cyborg bacteria could prove to be a viable alternative to fossil fuels, helping the world produce energy more cheaply and cleanly.