Wednesday, December 27, 2017

Scientists Develop A Battery That Can Run For More Than A Decade

Credit: Harvard University



Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a new flow battery that stores energy in organic molecules dissolved in neutral pH water. This new chemistry allows for a non-toxic, non-corrosive battery with an exceptionally long lifetime, and it offers the potential to significantly decrease the costs of production.
The research, published in ACS Energy Letters, was led by Michael Aziz, the Gene and Tracy Sykes Professor of Materials and Energy Technologies and Roy Gordon, the Thomas Dudley Cabot Professor of Chemistry and Professor of Materials Science.

A Neutral pH Aqueous Organic–Organometallic Redox Flow Battery with Extremely High Capacity Retention
Eugene S. Beh†‡, Diana De Porcellinis†#, Rebecca L. Gracia∥, Kay T. Xia∥, Roy G. Gordon*†‡, and Michael J. Aziz*
† John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, United States
‡ Department of Chemistry and Chemical Biology, Harvard University, Cambridge, Massachusetts 02138, United States
# Department of Chemical Science and Technologies, University of Rome “Tor Vergata”, 00133 Rome, Italy
∥ Harvard College, Cambridge, Massachusetts 02138, United States
ACS Energy Lett., 2017, 2 (3), pp 639–644
DOI: 10.1021/acsenergylett.7b00019
Publication Date (Web): February 7, 2017
Copyright © 2017 American Chemical Society
*E-mail: gordon@chemistry.harvard.edu., *E-mail: maziz@harvard.edu.

Abstract
We demonstrate an aqueous organic and organometallic redox flow battery utilizing reactants composed of only earth-abundant elements and operating at neutral pH. The positive electrolyte contains bis((3-trimethylammonio)propyl)ferrocene dichloride, and the negative electrolyte contains bis(3-trimethylammonio)propyl viologen tetrachloride; these are separated by an anion-conducting membrane passing chloride ions. Bis(trimethylammoniopropyl) functionalization leads to ∼2 M solubility for both reactants, suppresses higher-order chemical decomposition pathways, and reduces reactant crossover rates through the membrane. Unprecedented cycling stability was achieved with capacity retention of 99.9943%/cycle and 99.90%/day at a 1.3 M reactant concentration, increasing to 99.9989%/cycle and 99.967%/day at 0.75–1.00 M; these represent the highest capacity retention rates reported to date versus time and versus cycle number. We discuss opportunities for future performance improvement, including chemical modification of a ferrocene center and reducing the membrane resistance without unacceptable increases in reactant crossover. This approach may provide the decadal lifetimes that enable organic–organometallic redox flow batteries to be cost-effective for grid-scale electricity storage, thereby enabling massive penetration of intermittent renewable electricity.
Flow batteries store energy in liquid solutions held in external tanks—the bigger the tanks, the more energy they store. They are a promising storage solution for renewable, intermittent energy sources like wind and solar, but today’s flow batteries often suffer degraded energy storage capacity after many charge-discharge cycles, requiring periodic maintenance of the electrolyte to restore the capacity.

By modifying the structures of molecules used in the positive and negative electrolyte solutions, and making them water soluble, the Harvard team was able to engineer a battery that loses only one percent of its capacity per 1000 cycles.
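To see how the per-cycle retention figures quoted in the abstract compound into that headline number, here is a small Python sketch; the only assumption beyond the abstract’s own percentages is that capacity fade multiplies cycle after cycle.

```python
# Compound the per-cycle capacity retention figures reported in the abstract
# over 1,000 charge/discharge cycles. Assumes capacity fade is purely
# multiplicative per cycle (an illustrative simplification).

retention_per_cycle = {
    "1.3 M reactant concentration": 0.999943,        # 99.9943%/cycle
    "0.75-1.00 M reactant concentration": 0.999989,  # 99.9989%/cycle
}

cycles = 1000
for label, r in retention_per_cycle.items():
    remaining = r ** cycles
    print(f"{label}: {remaining:.4%} capacity left after {cycles} cycles "
          f"(loss ≈ {1 - remaining:.2%})")

# Expected output (approximately):
#   1.3 M reactant concentration: 94.4578% capacity left after 1000 cycles (loss ≈ 5.54%)
#   0.75-1.00 M reactant concentration: 98.9060% capacity left after 1000 cycles (loss ≈ 1.09%)
```

The roughly one percent loss per 1,000 cycles corresponds to the lower-concentration figure; at 1.3 M the compounded loss is closer to five percent.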

“Lithium ion batteries don’t even survive 1000 complete charge/discharge cycles,” said Aziz.

“Because we were able to dissolve the electrolytes in neutral water, this is a long-lasting battery that you could put in your basement,” said Gordon. “If it spilled on the floor, it wouldn’t eat the concrete, and since the medium is noncorrosive, you can use cheaper materials to build the components of the batteries, like the tanks and pumps.”

This reduction of cost is important. The Department of Energy (DOE) has set a goal of building a battery that can store energy for less than $100 per kilowatt-hour, which would make stored wind and solar energy competitive with energy produced from traditional power plants.

“If you can get anywhere near this cost target, then you change the world,” said Aziz. “It becomes cost effective to put batteries in so many places. This research puts us one step closer to reaching that target.”

“This work on aqueous soluble organic electrolytes is of high significance in pointing the way towards future batteries with vastly improved cycle life and considerably lower cost,” said Imre Gyuk, Director of Energy Storage Research at the Office of Electricity of the DOE. “I expect that efficient, long duration flow batteries will become standard as part of the infrastructure of the electric grid.”

The key to designing the battery was to first figure out why previous molecules were degrading so quickly in neutral solutions, said Eugene Beh, a postdoctoral fellow and first author of the paper. By first identifying how the molecule viologen in the negative electrolyte was decomposing, Beh was able to modify its molecular structure to make it more resilient.

Next, the team turned to ferrocene, a molecule well known for its electrochemical properties, for the positive electrolyte.

“Ferrocene is great for storing charge but is completely insoluble in water,” said Beh. “It has been used in other batteries with organic solvents, which are flammable and expensive.”

But by functionalizing ferrocene molecules in the same way as with the viologen, the team was able to turn an insoluble molecule into a highly soluble one that could also be cycled stably.

“Aqueous soluble ferrocenes represent a whole new class of molecules for flow batteries,” said Aziz.

The neutral pH should be especially helpful in lowering the cost of the ion-selective membrane that separates the two sides of the battery. Most flow batteries today use expensive polymers that can withstand the aggressive chemistry inside the battery. They can account for up to one third of the total cost of the device. With essentially salt water on both sides of the membrane, expensive polymers can be replaced by cheap hydrocarbons.

This research was coauthored by Diana De Porcellinis, Rebecca Gracia, and Kay Xia. It was supported by the Office of Electricity Delivery and Energy Reliability of the DOE and by the DOE’s Advanced Research Projects Agency-Energy.

With assistance from Harvard’s Office of Technology Development (OTD), the researchers are working with several companies to scale up the technology for industrial applications and to optimize the interactions between the membrane and the electrolyte. Harvard OTD has filed a portfolio of pending patents on innovations in flow battery technology.

ORIGINAL: Daily Accord
Credit: Harvard University
Feb 9, 2017 

Sunday, December 24, 2017

MIT Just Created Living Plants That Glow Like A Lamp, And Could Grow Glowing Trees To Replace Streetlights


Roads of the future could be lit by glowing trees instead of streetlamps, thanks to a breakthrough in creating bioluminescent plants. Experts injected specialized nanoparticles into the leaves of a watercress plant, which caused it to give off a dim light for nearly four hours. This could solve lots of problems.

The chemical involved, which produced enough light to read a book by, is the same as is used by fireflies to create their characteristic shine. To create their glowing plants, engineers from the Massachusetts Institute of Technology (MIT) turned to an enzyme called luciferase. Luciferase acts on a molecule called luciferin, causing it to emit light.
 
Another molecule called Co-enzyme A helps the process along by removing a reaction byproduct that can inhibit luciferase activity. The MIT team packaged each of these components into a different type of nanoparticle carrier.

The nanoparticles help them to get to the right part of the plant and also prevent them from building to concentrations that could be toxic to the plants. The result was a watercress plant that functioned like a desk lamp.

Researchers believe that, with further tweaking, the technology could also be used to provide light bright enough to illuminate a workspace or even an entire street, as well as low-intensity indoor lighting.

Michael Strano, professor of chemical engineering at MIT and the senior author of the study, said: 'The vision is to make a plant that will function as a desk lamp — a lamp that you don't have to plug in. The light is ultimately powered by the energy metabolism of the plant itself. Our work very seriously opens up the doorway to streetlamps that are nothing but treated trees, and to indirect lighting around homes.'

Luciferases make up a class of oxidative enzymes found in several species that enable them to 'bioluminesce', or emit light.
Fireflies are able to emit light via a chemical reaction.

In the chemical reaction luciferin is converted to oxyluciferin by the luciferase enzyme. Some of the energy released by this reaction is in the form of light. The reaction is highly efficient, meaning nearly all the energy put into the reaction is rapidly converted to light.

Lighting accounts for around 20 per cent of worldwide energy consumption, so replacing electric lights with naturally bioluminescent plants would represent a significant cut to CO2 emissions. The researchers’ early efforts at the start of the project yielded plants that could glow for about 45 minutes, which they have since improved to 3.5 hours.

The light generated by one ten centimetre (four inch) watercress seedling is currently about one-thousandth of the amount needed to properly read by, but it was enough to illuminate the words on a page of John Milton's Paradise Lost.

The MIT team believes it can boost the light emitted, as well as the duration of light, by further optimising the concentration and release rates of the chemical components. For future versions of this technology, the team hopes to develop a way to paint or spray the nanoparticles onto plant leaves, which could make it possible to transform trees and other large plants into light sources. 

The researchers have also demonstrated that they can turn the light off by adding nanoparticles carrying a luciferase inhibitor. This could enable them to eventually create plants that shut off their light emission in response to environmental conditions such as sunlight, they say.

The full findings of the study were published in the American Chemical Society journal Nano Letters.

ORIGINAL: The Space Academy
December 18, 2017

Friday, December 22, 2017

Electric eel inspires bio-friendly power source, what happens next may shock you

Could a device inspired by the electric eel offer a safer way to power medical implants?
Scientists are always on the lookout for safer, more natural ways to power devices that go into our bodies. After all, who really needs toxic battery elements and replacement surgery?

One organism that is pretty good at generating biocompatible power (for itself, at least) is the electric eel, and scientists have now used the high-voltage species as a blueprint for a promising new self-charging device that could one day power things like pacemakers, prosthetics and even augmented reality contact lenses.

Electric eels generate voltage through long stacks of thin cells that run end-to-end through their bodies. Called electrocytes, these cells create electricity by allowing sodium ions to rush into one end and potassium ions out the other, all at the same time. The voltage created by each cell is small, but together, the stacks within a single eel can generate as much as 600 V.

To recreate this effect, researchers from the University of Fribourg, the University of Michigan and the University of California San Diego turned to the difference in salinity between fresh and saltwater. They deposited ion-conducting hydrogel blobs onto clear plastic sheets and separated them with ion-selective membranes.

Hundreds of blobs containing salt and freshwater were arranged in an alternating pattern. When the team had all these gel compartments make contact with one another, they were able to generate 100 V through what is known as reverse electrodialysis, where energy is generated through differing salt concentrations in the water.
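For a rough sense of why so many gel compartments are needed, the potential across a single ideally selective membrane separating salty and fresh compartments follows the Nernst equation. The Python sketch below uses illustrative concentrations and a 100 V target; none of the numbers are taken from the paper.

```python
import math

# Nernst potential across one ideally selective ion-exchange membrane
# separating a high-salinity and a low-salinity compartment.
R = 8.314      # J/(mol*K), gas constant
T = 298.0      # K, room temperature (assumed)
F = 96485.0    # C/mol, Faraday constant
z = 1          # charge of the transported ion (Na+ or Cl-)

c_salty = 0.5   # mol/L, "seawater-like" compartment (assumed)
c_fresh = 0.02  # mol/L, "freshwater-like" compartment (assumed)

e_per_junction = (R * T) / (z * F) * math.log(c_salty / c_fresh)
print(f"Potential per membrane junction: {e_per_junction * 1000:.1f} mV")

# How many junctions in series would be needed to reach ~100 V,
# ignoring internal resistance and non-ideal membrane selectivity?
target = 100.0  # volts
print(f"Junctions needed for {target:.0f} V: {math.ceil(target / e_per_junction)}")
```

Real junctions deliver less than this ideal figure because of imperfect membrane selectivity and internal resistance, which is why the printed sheets pack in so many alternating compartments.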

While the eel triggers the simultaneous contact of its electrocytes using a neurotransmitter called acetylcholine as the command signal, the team achieved this by carefully working a special origami pattern – called a Miura-ori fold – into the plastic sheet. This meant that when pressure was applied to the sheet, it quickly snapped together and the cells shifted into exactly the right positions to create the electricity.

The device, which the team calls an artificial electric organ, isn't in the same ballpark as an eel in terms of output, but the researchers do have some ideas about how to boost its efficiency. They point to the metabolic energy created by ion differences in the eel's stomach, or mechanical muscle energy, as some of the possibilities, but note that recreating these would be a major challenge.

"The electric organs in eels are incredibly sophisticated, they're far better at generating power than we are," Mayer said. "But the important thing for us was to replicate the basics of what's happening."

The research was published in the journal Nature.


 



Source: University of Fribourg, University of Michigan

ORIGINAL: NewAtlas
Nick Lavars
December 14th, 2017

Thursday, December 14, 2017

512-year-old Greenland shark may be the oldest living vertebrate on Earth

Images via Wikimedia and Julius Nielsen
A recently identified 512-year-old Greenland shark may be the world’s oldest living vertebrate. Although scientists discovered the 18-foot fish in the North Atlantic months ago, its age was only recently revealed in a study published in the journal Science. Greenland sharks have the longest lifespan of any vertebrate animal, so it is perhaps unsurprising that the species would boast the oldest living individual vertebrate as well. Nonetheless, the fact that this creature may have been born as early as 1505 is remarkable. “It definitely tells us that this creature is extraordinary and it should be considered among the absolute oldest animals in the world,” said marine biologist Julius Nielsen, whose research team studied the shark’s longevity.

To determine the shark’s age, scientists used a mathematical model that analyzes the lens and cornea of a shark’s eye and links the size of the shark to its age. Greenland sharks grow at a rate of about 1 centimeter per year, which allowed scientists to estimate a particular shark’s age. The ability to measure the age of this mysterious shark is relatively new. “Fish biologists have tried to determine the age and longevity of Greenland sharks for decades, but without success,” said Steven Campana, a shark expert from the University of Iceland. “Given that this shark is the apex predator (king of the food chain) in Arctic waters, it is almost unbelievable that we didn’t know whether the shark lives for 20 years, or for 1,000 years.”
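As a back-of-envelope illustration of the growth-rate reasoning described above, and nothing more, a naive length-based estimate looks like this in Python; the birth length is an assumption, and the published model is considerably more involved than a linear growth rule.

```python
# Naive length-based age estimate for a Greenland shark, following the
# growth-rate logic in the article: ~1 cm of growth per year.
# The birth length is an assumption for illustration; the published model
# is considerably more involved than this linear rule.

length_ft = 18.0                 # reported length of the shark
length_cm = length_ft * 30.48    # convert feet to centimetres
growth_cm_per_year = 1.0         # approximate growth rate from the article
birth_length_cm = 40.0           # assumed size at birth (illustrative)

age_estimate = (length_cm - birth_length_cm) / growth_cm_per_year
print(f"Length: {length_cm:.0f} cm -> naive age estimate: {age_estimate:.0f} years")
# -> Length: 549 cm -> naive age estimate: 509 years
```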

The Greenland shark thrives in the frigid waters of the North Atlantic. Despite its considerable size, comparable to that of a great white shark, the Greenland shark is a scavenger and has never been observed hunting. Its diet primarily consists of fish, though remains of reindeer, polar bear, moose, and seals have been found in the species’ stomachs. To cope with life in deep water, the living tissues of a Greenland shark contain high levels of trimethylamine N-oxide, which makes the meat toxic. However, when the flesh is fermented, it can be consumed, as it is in Iceland as a dish known as kæstur hákarl.


ORIGINAL: Inhabitat
by Greg Beach
2017/12/15

Wednesday, November 29, 2017

Semi-Synthetic Life Form Now Fully Armed and Operational

WILLIAM B. KIOSSES, PHD, THE SCRIPPS RESEARCH INSTITUTE

Could life have evolved differently? A germ with “unnatural” DNA letters suggests the answer is yes.

E. coli bacteria with an expanded genetic code could help manufacture new drugs.

Every living thing on Earth stores the instructions for life as DNA, using the four genetic bases A, G, C, and T.

All except one, that is.

In the San Diego laboratory of Floyd Romesberg—and at a startup he founded—grow bacteria with an expanded genetic code. They have two more letters, an “unnatural” pair he calls X and Y.

Romesberg, head of a laboratory at the Scripps Research Institute, first amended the genes of the bacterium E. coli to harbor the new DNA components in 2014. Now, for the first time, the germs are using their expanded code to manufacture proteins with equally unusual components.

“We wanted to prove the concept that every step of information storage and retrieval could be mediated by an unnatural base pair,” he says. “It’s not a curiosity anymore.”

The bacterium is termed a “semi-synthetic” organism, since while it harbors an expanded alphabet, the rest of the cell hasn’t been changed. Even so, Peter Carr, a biological engineer at MIT’s Lincoln Laboratory, says it suggests that scientists are only beginning to learn how far life can be redesigned, a concept known as synthetic biology.

“We don’t know what the ultimate limits are on our ability to engineer living systems, and this paper helps show we’re not limited to four bases,” he says. “I think it’s pretty impressive.”

Humankind has been disappointed in the quest to find life on Mars or Jupiter. Yet the alien germs growing in San Diego already hint that our Earth biology isn’t the only one possible. “It suggests that if life did evolve elsewhere, it might have done so using very different molecules or different forces,” says Romesberg. “Life as we know it may not be the only solution, and may not be the best one.”

Romesberg’s efforts to lay a genetic cuckoo’s egg inside bacteria started 15 years ago. After creating a candidate pair of new genetic letters, the first step was to add them to a bacterium’s genome and show it could use them to store information. That is, could the organism abide by the unnatural DNA and also copy it faithfully as it divided?

The answer, his lab showed in 2014, was yes. But early versions of the bacteria were none too healthy. They died or got rid of the extra letters in their DNA, which are stored in a mini-chromosome called a plasmid. In Romesberg’s words, his creations “lacked the fortitude of real life.”

By this year, the team had devised a more stable bacterium. But it wasn’t enough to endow the germ with a partly alien code—it needed to use that code to make a partly alien protein. That’s what Romesberg’s team, reporting today in the journal Nature, says it has done.

Using the extra letters, they instructed bacteria to manufacture a glowing green protein that has in it a single unnatural amino acid. “We stored information, and now we retrieved it. The next thing is to use it. We are going to do things no one else can,” says Romesberg.

The practical payoff of an organism with a bigger genetic alphabet is that it has a bigger vocabulary—it can assemble proteins with components not normally found in nature. That could solve some tricky problems in medicinal chemistry, which is the art of shaping molecules so they do exactly what’s wanted in the body, and nothing that isn’t.
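One way to see why a bigger alphabet means a bigger vocabulary is to count codons: with four bases, three-letter codons give 64 possible words, most of which are already assigned to the 20 standard amino acids and the stop signals. The Python sketch below is simple combinatorics, not a claim about how Synthorx’s cells actually decode the new letters.

```python
# Count three-letter codons available with the natural and the expanded
# genetic alphabets. Pure combinatorics: alphabet_size ** codon_length.

codon_length = 3

natural_bases = 4        # A, G, C, T
expanded_bases = 6       # A, G, C, T plus the unnatural pair X and Y

natural_codons = natural_bases ** codon_length    # 64
expanded_codons = expanded_bases ** codon_length  # 216

standard_amino_acids = 20  # encoded by the natural code (plus stop signals)

print(f"Natural alphabet:  {natural_codons} codons for {standard_amino_acids} amino acids")
print(f"Expanded alphabet: {expanded_codons} codons, "
      f"{expanded_codons - natural_codons} of them new")
```

The work described above used the expanded code to place just one unnatural amino acid into a fluorescent protein; the arithmetic simply shows how much headroom the larger alphabet leaves.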

Pursuing such aims is a startup Romesberg founded, named Synthorx. It has raised $16 million so far and hopes to turn the science into new drugs. One project aims to make a new version of interleukin-2, an anticancer drug with some nasty side effects. Maybe the semi-synthetic germs could fix that by swapping in some unusual components at key points. “This company needs to get out of the lab and into the clinic,” says its newly installed CEO, Laura Shawver.

Carr says an expanded genetic code could have implications beyond providing a shortcut for programming new properties into proteins. He also thinks the new letters might be used to hide information in ways other biologists couldn’t easily see. That could be useful in concealing intellectual property or, perhaps, to disguise a bioweapon.

Synthorx Inc 2015




November 29, 2017

Friday, November 10, 2017

The Fungus That Turns Ants Into Zombies Is More Diabolical Than We Realized

A dead spiny ant with fungal spores erupting out of its head. (Image: David Hughes/Penn State University)

Carpenter ants of the Brazilian rain forest have it rough. When one of these insects gets infected by a certain fungus, it turns into a so-called “zombie ant” and is no longer in control of its actions. Manipulated by the parasite, an infected ant will leave the cozy confines of its arboreal home and head to the forest floor—an area more suitable for fungal growth. After parking itself on the underside of a leaf, the zombified ant anchors itself into place by chomping down onto the foliage. This marks the victim’s final act. From here, the fungus continues to grow and fester inside the ant’s body, eventually piercing through the ant’s head and releasing its fungal spores. This entire process, from start to finish, can take upwards of ten agonizing days.

We’ve known about zombie ants for quite some time, but scientists have struggled to understand how the parasitic fungus, O. unilateralis (pronounced yu-ni-lat-er-al-iss), performs its puppeteering duties. This fungus is often referred to as a “brain parasite,” but new research published this week in Proceedings of the National Academy of Sciences shows that the brains of these zombie ants are left intact by the parasite, and that O. unilateralis is able to control the actions of its host by infiltrating and surrounding muscle fibers throughout the ant’s body. In effect, it’s converting an infected ant into an externalized version of itself. Zombie ants thus become part insect, part fungus. Awful, right?

To make this discovery, the scientist who first uncovered the zombie ant fungus, David Hughes from Penn State, launched a multidisciplinary effort that involved an international team of entomologists, geneticists, computer scientists, and microbiologists. The point of the study was to look at the cellular interactions between O. unilateralis and the carpenter ant host Camponotus castaneus during a critical stage of the parasite’s life cycle—that phase when the ant anchors itself onto the bottom of leaf with its powerful mandibles.
Ants infected with late stage O. unilateralis infection. (Image: David Hughes/PLOS ONE)

“The fungus is known to secrete tissue-specific metabolites and cause changes in host gene expression as well as atrophy in the mandible muscles of its ant host,” said lead author Maridel Fredericksen, a doctoral candidate at the University of Basel Zoological Institute, Switzerland, in a statement. “The altered host behavior is an extended phenotype of the microbial parasite’s genes being expressed through the body of its host. But it’s unknown how the fungus coordinates these effects to manipulate the host’s behavior.”

In speaking of the parasite’s “extended phenotype,” Fredericksen is referring to the way that O. unilateralis is able to hijack an external entity, in this case the carpenter ant, and make it a literal extension of its physical self.

For the study, the researchers infected carpenter ants with either O. unilateralis or a less threatening, non-zombifying fungal pathogen known as Beauveria bassiana, which served as the control. By comparing the two different fungi, the researchers were able to discern the specific physiological effects of O. unilateralis on the ants.

Using electron microscopes, the researchers created 3D visualizations to determine location, abundance, and activity of the fungi inside the bodies of the ants. Slices of tissue were taken at a resolution of 50 nanometers, which were captured using a machine that could repeat the slicing and imaging process at a rate of 2,000 times over a 24-hour period. To parse this hideous amount of data, the researchers turned to artificial intelligence, whereby a machine-learning algorithm was taught to differentiate between fungal and ant cells. This allowed the researchers to determine how much of the insect was still ant, and how much of it was converted into the externalized fungus.
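The study’s imaging pipeline is far more elaborate, but the core idea of teaching an algorithm to tell fungal cells from ant cells can be sketched in a few lines of Python with scikit-learn. The random “patches” and the two class labels below are synthetic stand-ins, not the study’s data.

```python
# Illustrative sketch: train a classifier to separate "fungal" from "ant"
# cell image patches. Synthetic random data stands in for the real electron
# microscopy slices; this is not the study's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_patches, patch_pixels = 2000, 16 * 16
# Fake patches: class 0 ("ant tissue") and class 1 ("fungal cell") differ
# slightly in mean intensity so the toy problem is learnable.
X0 = rng.normal(0.40, 0.15, size=(n_patches // 2, patch_pixels))
X1 = rng.normal(0.55, 0.15, size=(n_patches // 2, patch_pixels))
X = np.vstack([X0, X1])
y = np.array([0] * (n_patches // 2) + [1] * (n_patches // 2))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"Held-out accuracy on toy data: {clf.score(X_test, y_test):.2f}")
```

Once a classifier of this kind labels every cell in the 3D reconstruction, tallying the two classes gives the sort of how-much-of-the-ant-is-still-ant estimate described above.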
3D reconstruction of an ant mandible adductor muscle (red) surrounded by a network of fungal cells (yellow). (Image: Hughes Laboratory/Penn State)


The results were truly disturbing. Cells of O. unilateralis had proliferated throughout the entire ant’s body, from the head and thorax right down to the abdomen and legs. What’s more, these fungal cells were all interconnected, creating a kind of Borg-like, collective biological network that controlled the ants’ behavior.

“We found that a high percentage of the cells in a host were fungal cells,” said Hughes in a statement. “In essence, these manipulated animals were a fungus in ants’ clothing.”

But most surprising of all, the fungus hadn’t infiltrated the carpenter ants’ brains.

“Normally in animals, behavior is controlled by the brain sending signals to the muscles, but our results suggest that the parasite is controlling host behavior peripherally,” explained Hughes. “Almost like a puppeteer pulls the strings to make a marionette move, the fungus controls the ant’s muscles to manipulate the host’s legs and mandibles.”

How the fungus is able to navigate the ant towards the leaf, however, is still largely unknown. And in fact, that the fungus leaves the brain alone may provide a clue. Previous work showed that the fungus may be chemically altering the ants’ brains, leading Hughes’ team to speculate that the fungus needs the ant to survive long enough to perform its final leaf-biting behavior. It’s also possible, however, that the fungus needs to leverage some of that existing ant brain power (and attendant sensorial capabilities) to “steer” the ant around the forest floor. Future research will be required to turn these theories into something more substantial.

“This is an excellent example of how interdisciplinary research can drive our knowledge forward,” Charissa de Bekker, an entomologist at the University of Central Florida not affiliated with the new study, told Gizmodo. “The researchers used cutting-edge techniques to finally confirm something that we thought to be true but weren’t sure about: that the fungus O. unilateralis does not invade or damage the brain.”

de Bekker says this work confirms that something much more intricate is going on, and that the fungus might be controlling the ant by secreting compounds that can work as neuromodulators. Data gleaned from the fungal genome points to this conclusion as well.

“This means the fungus might produce a wealth of bioactive compounds that could be of interest in terms of novel drug discovery,” said de Bekker. “I am, thus, very excited about this work!”

An authority on the zombie ant fungus herself, de Bekker also released new research this week. Her new study, published in PLOS One and co-authored with David Hughes and others, looked into the molecular clock of the Ophiocordyceps kimflemingiae fungus (a recently named species of the O. unilateralis complex) to see if the daily rhythms, and thus biological clocks, are an important aspect of the parasite-host interactions studied by biologists.

“In addition to confirming that the fungus indeed has a molecular clock, we found that this results in the daily oscillation of certain genes,” de Bekker told Gizmodo. “While some of them are active during the day-time, others are active during the night-time. Interestingly, we found that the fungus especially activates genes encoding for secreted proteins during the night-time. These are the compounds that possibly interact with the host’s brain! The fungus, therefore, does not just release bioactive compounds to manipulate behavior, but there seems to be a precise timing to it as well.”

There’s clearly still lots to learn about this insidious parasite and how it hijacks its insectoid hosts, but as these recent studies attest, we’re getting steadily closer to the answer—one that’s clearly disturbing in nature.

[Proceedings of the National Academy of Sciences, PLOS One]
ORIGINAL: Gizmodo
By George Dvorsky

Saturday, October 21, 2017

Miniature water droplets could solve an origin-of-life riddle, Stanford researchers find

Before life could begin, something had to kickstart the production of critical molecules. That something may have been as simple as a mist made up of tiny drops of water.

It is one of the great ironies of biochemistry:  life on Earth could not have begun without water; yet water stymies some chemical reactions necessary for life itself.
Chemistry Professor Richard Zare (Image credit: L.A. Cicero)
Now, researchers report today in Proceedings of the National Academy of Sciences, they have found a novel, even poetic solution to the so-called “water problem” in the form of miniature droplets of water, formed perhaps in the mist of a crashing ocean wave or the clouds in the sky.

The water problem relates primarily to the element phosphorus, which is attached to a variety of life’s molecules through a process called phosphorylation. “You and I are alive because of phosphorus and phosphorylation,” said Richard Zare, a professor of chemistry and one of the paper’s senior authors. “You can’t have life without phosphorus.”

The water problem
Phosphorus is a necessary ingredient in many molecules critical for life, including our DNA, its relative RNA, and ATP, the molecule that makes up our body’s energy storage system. But ordinarily water gets in the way of producing those chemicals. Modern life has evolved ways of sidestepping that problem in the form of enzymes that help phosphorylation along. But how primitive components of these molecules formed before the workarounds evolved remains a controversial and at times slightly oddball subject. Among the proposed solutions are highly reactive forms of extraterrestrial phosphorus and heating powered by naturally occurring nuclear reactions.

Microdroplets solve the phosphorylation problem in a relatively elegant way, in large part because they have geometry on their side. It turns out that water is mostly a problem when the phosphate is floating around inside a pool of water or a primitive ocean, rather than on its surface.

Microdroplets are mostly surface. They perfectly optimize the need for life to form in and around water, but with enough surface area for phosphorylation and other reactions to occur.
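A quick calculation shows what “mostly surface” means: for a sphere the surface-to-volume ratio is 3/r, so shrinking a body of water from a pool to a micrometre-scale droplet multiplies the surface available per unit volume about a millionfold. The radii in this Python sketch are illustrative choices.

```python
# Surface-area-to-volume ratio of a spherical water body: S/V = 3 / r.
# Radii below are illustrative, from a micrometre-scale droplet up to a pool.

radii_m = {
    "1 micrometre droplet": 1e-6,
    "1 millimetre drop": 1e-3,
    "10 centimetre beaker": 0.1,
    "1 metre pool": 1.0,
}

for label, r in radii_m.items():
    ratio = 3.0 / r   # surface area per unit volume, in 1/m
    print(f"{label:>22}: S/V = {ratio:.1e} m^-1")
```

That extra surface is where the phosphorylation chemistry described above gets room to happen.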

In fact, the large amount of surface area provided by microdroplets is already known to be a great place for chemistry. Previous experiments suggest microdroplets can increase reaction rates for other processes by a thousand or even a million times, depending on the details of the reaction being studied.

Spontaneous molecules
Microdroplets seemed like a possible solution to the water problem. But to show that they really work, Zare and his colleagues sprayed tiny droplets of water, laced with phosphorus and other chemicals, into a chamber where the resulting compounds could be analyzed. They found that several phosphate-containing molecules formed spontaneously in these lab-made microdroplets without any catalyst to get them started. Those molecules included sugar phosphates, which are a step in how our cells create energy, and one of the molecules that make up RNA, a DNA relative that primitive organisms use to carry their genetic code. Both reactions are rare at best in larger volumes of water.

That observation, joined with the fact that microdroplets are ubiquitous – from clouds in the sky to the mist created by a crashing ocean wave – suggests that they could have played a role in fostering life on Earth. In the future, Zare hopes to look for phosphates that make up proteins and other molecules.

Even if he can produce those compounds, however, Zare does not believe he and his colleagues will have found the one true solution to the origin of life. “I don’t think we’re going to understand exactly how life began on Earth,” said Zare, who is also the Marguerite Blake Wilbur Professor in Natural Science. Essentially, he said, that is because no one can go back in time to watch what happened as life emerged and there is no good fossil record for the formation of biomolecules. “But we could understand some of the possibilities,” he added.

Zare is also a member of the Stanford Cardiovascular Institute, the Stanford Cancer Institute, the Stanford Neurosciences Institute and the Stanford Woods Institute for the Environment. Additional Stanford authors are postdoctoral fellows Inho Nam and Jae Kyoo Lee. Hong Gil Nam of DGIST in South Korea is co-senior author with Zare. The work was supported by the Institute for Basic Science (South Korea) and the U. S. Air Force Office of Scientific Research through a Basic Research Initiative grant.


ORIGINAL: Stanford News
BY NATHAN COLLINS
OCTOBER 20, 2017

New Research Points to a Genetic Switch That Can Let Our Bodies Talk to Electronics


Shutterstock
IN BRIEF
Our bodies are biologically based and therefore are not equipped to communicate with electronics efficiently. New research could make it possible to genetically engineer our cells to be able to communicate with electronics.

The development has the potential to allow us to eventually build apps that autonomously detect and treat disease.

Microelectronics has transformed our lives. Cellphones, earbuds, pacemakers, defibrillators – all these and more rely on microelectronics’ very small electronic designs and components. Microelectronics has changed the way we collect, process and transmit information.

Such devices, however, rarely provide access to our biological world; there are technical gaps. We can’t simply connect our cellphones to our skin and expect to gain health information. For instance, is there an infection? What type of bacteria or virus is involved? We also can’t program the cellphone to make and deliver an antibiotic, even if we knew whether the pathogen was Staph or Strep. There’s a translation problem when you want the world of biology to communicate with the world of electronics.

The research we’ve just published with colleagues in Nature Communications brings us one step closer to closing that communication gap.
Electronic control of gene expression and cell behaviour in Escherichia coli through redox signalling

ABSTRACT:
The ability to interconvert information between electronic and ionic modalities has transformed our ability to record and actuate biological function. Synthetic biology offers the potential to expand communication ‘bandwidth’ by using biomolecules and providing electrochemical access to redox-based cell signals and behaviours. While engineered cells have transmitted molecular information to electronic devices, the potential for bidirectional communication stands largely untapped. Here we present a simple electrogenetic device that uses redox biomolecules to carry electronic information to engineered bacterial cells in order to control transcription from a simple synthetic gene circuit. Electronic actuation of the native transcriptional regulator SoxR and transcription from the PsoxS promoter allows cell response that is quick, reversible and dependent on the amplitude and frequency of the imposed electronic signals. Further, induction of bacterial motility and population based cell-to-cell communication demonstrates the versatility of our approach and potential to drive intricate biological behaviours.

Source: NATURE COMMS
Rather than relying on the usual molecular signals, like hormones or nutrients, that control a cell’s gene expression, we created a synthetic “switching” system in bacterial cells that recognizes electrons instead. This new technology – a link between electrons and biology – may ultimately allow us to program our phones or other microelectronic devices to autonomously detect and treat disease.

COMMUNICATING WITH ELECTRONS, NOT MOLECULES
One of the barriers scientists have encountered when trying to link microelectronic devices with biological systems has to do with information flow. In biology, almost all activity is made possible by the transfer of molecules such as glucose, epinephrine, cholesterol and insulin signaling between cells and tissues. Infecting bacteria secrete molecular toxins and attach to our skin using molecular receptors. To treat an infection, we need to detect these molecules to identify the bacteria, discern their activities and determine how to best respond.

Microelectronic devices don’t process information with molecules. A microelectronic device typically has silicon, gold, chemicals like boron or phosphorus and an energy source that provides electrons. By themselves, they’re poorly suited to engage in molecular communication with living cells.

Free electrons don’t exist in biological systems, so there’s almost no way to connect with microelectronics. There is, however, a small class of molecules that stably shuttle electrons. These are called “redox” molecules; they can transport electrons, sort of like a wire does. The difference is that in a wire, electrons can flow freely to any location within it; redox molecules must undergo chemical reactions – oxidation or reduction reactions – to “hand off” electrons.
Bacteria are engineered to respond to a redox molecule activated by an electrode by creating an electrogenetic switch. Bentley and Payne, CC BY-ND

TURNING CELLS ON AND OFF
Capitalizing on the electronic nature of redox molecules, we genetically engineered bacteria to respond to them. We focused on redox molecules that could be “programmed” by the electrode of a microelectronic device. The device toggles the molecule’s oxidation state: it is either oxidized (it loses an electron) or reduced (it gains an electron). The electron is supplied by a typical energy source in electronics, like a battery.

We wanted our bacteria cells to turn “on” and “off” due to the applied voltage – voltage that oxidized a naturally occurring redox molecule, pyocyanin.

Electrically oxidizing pyocyanin allowed us to control our engineered cells, turning them on or off so they would synthesize (or not) a fluorescent protein. We could rapidly identify what was happening in these cells because the protein emits a green hue.
(a) Device-mediated electronic input consists of applied potential (blue or red step functions) for controlling the oxidation state of redox-mediators (transduced input). Redox mediators intersect with cells to actuate transcription and, depending on actuated gene-of-interest, control biological output.
(b) The electrogenetic device consists of the region encompassing the gene coding for the SoxR protein and the divergent overlapping PsoxR/PsoxS promoters. A gene of interest is placed downstream of the PsoxS promoter. Pyo (O) initiates gene induction and Fcn(R/O), through interactions with respiratory machinery, allows electronic control of induction level. Fcn (R/O), ferro/ferricyanide; Pyo, pyocyanin. The oxidation state of both redox mediators is colorimetrically indicated (Fcn (O) is yellow pentagon; Fcn (R) is white pentagon; Pyo (O) is blue hexagon; Pyo (R) is grey hexagon). Encircled ‘e−‘ and arrows indicate electron movement.
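A toy way to picture the switching behavior described above is a two-variable simulation: the applied potential sets the fraction of oxidized pyocyanin, and the oxidized mediator drives expression of a reporter that is slowly diluted away. Every rate constant below is invented for illustration; this is not a model fitted to the paper’s data.

```python
# Toy simulation of an electrogenetic switch: an applied potential sets the
# oxidized fraction of a redox mediator (pyocyanin), and the oxidized mediator
# drives expression of a reporter protein (e.g. GFP) that dilutes over time.
# All parameters are invented for illustration only.
import numpy as np

dt = 0.01                 # hours per step
t = np.arange(0, 8, dt)   # simulate 8 hours

# Square-wave "electrode program": potential on for 2 h, off for 2 h, repeated.
potential_on = ((t // 2).astype(int) % 2) == 0

k_ox = 5.0      # oxidation rate when potential is applied (1/h)
k_red = 5.0     # re-reduction rate when potential is removed (1/h)
k_expr = 1.0    # reporter synthesis rate per unit oxidized mediator (1/h)
k_dil = 0.8     # reporter dilution/degradation rate (1/h)

oxidized = np.zeros_like(t)   # fraction of mediator in the oxidized state
reporter = np.zeros_like(t)   # relative reporter (GFP) level

for i in range(1, len(t)):
    drive = k_ox * (1 - oxidized[i-1]) if potential_on[i] else -k_red * oxidized[i-1]
    oxidized[i] = oxidized[i-1] + dt * drive
    reporter[i] = reporter[i-1] + dt * (k_expr * oxidized[i-1] - k_dil * reporter[i-1])

for hour in range(0, 9, 2):
    idx = min(int(hour / dt), len(t) - 1)
    state = "ON " if potential_on[idx] else "OFF"
    print(f"t = {hour} h  potential {state}  reporter level = {reporter[idx]:.2f}")
```

The reporter rises while the potential is applied and decays after it is removed, the reversible on/off cycling the authors describe.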

In another example, we made bacteria that, when switched on, would swim from a stationary position. Bacteria normally swim in starts and stops referred to as a “run” or a “tumble.” The “run” ensures they move in a straight path. When they “tumble,” they essentially remain in one spot. A protein called CheZ controls the “run” portion of bacteria’s swimming activity. Our electrogenetic switch turned on the synthesis of CheZ, so that the bacteria could move forward.
Bacteria can naturally join forces as biofilms and work together. CDC/Janice Carr, CC BY
We were also able to electrically signal a community of cells to exhibit collective behavior. We made cells with switches controlling the synthesis of a signaling molecule that diffuses to neighboring cells and, in turn, causes changes in their behavior. Electric current turned on cells that, in turn, “programmed” a natural biological signaling process to alter the behavior of nearby cells. We exploited bacterial quorum sensing – a natural process where bacterial cells “talk” to their neighbors and the collection of cells can behave in ways that benefit the entire community.

Perhaps even more interesting, our groups showed that we could both turn on gene expression and turn it off. By reversing the polarity on the electrode, the oxidized pyocyanin becomes reduced – its inactive form. Then, the cells that were turned on were engineered to quickly revert back to their original state. In this way, the group demonstrated the ability to cycle the electrically programmed behavior on and off, repeatedly.

Interestingly, the on and off switch enabled by pyocyanin was fairly weak. By including another redox molecule, ferricyanide, we found a way to amplify the entire system so that the gene expression was very strong, again on and off. The entire system was robust, repeatable and didn’t negatively affect the cells.

SENSING AND RESPONDING ON A CELLULAR LEVEL
Armed with this advance, devices could potentially electrically stimulate bacteria to make therapeutics and deliver them to a site. For example, imagine swallowing a small microelectronic capsule that could record the presence of a pathogen in your GI tract and also contain living bacterial factories that could make an antimicrobial or other therapy, all in a programmable autonomous system.

This current research ties into previous work done here at the University of Maryland where researchers had discovered ways to “record” biological information, by sensing the biological environment, and based on the prevailing conditions, “write” electrons to devices. We and our colleagues “sent out” redox molecules from electrodes, let those molecules interact with the microenvironment near the electrode and then drew them back to the electrode so they could inform the device on what they’d seen. This mode of “molecular communication” is somewhat analogous to sonar, where redox molecules are used instead of sound waves.

These molecular communication efforts were used to identify pathogens, monitor the “stress” in blood levels of individuals with schizophrenia and even determine the differences in melanin from people with red hair. For nearly a decade, the Maryland team has developed methodologies to exploit redox molecules to interrogate biology by directly writing the information to devices with electrochemistry.

Perhaps it is now time to integrate these technologies:

  • Use molecular communication to sense biological function and transfer the information to a device. 
  • Then use the device – maybe a small capsule or perhaps even a cellphone – to program bacteria to make chemicals and other compounds that issue new directions to the biological system. 

It may sound fantastical, many years away from practical uses, but our team is working hard on such valuable applications…stay tuned!

ORIGINAL: Futurism

Wednesday, October 18, 2017

Stunning AI Breakthrough Takes Us One Step Closer to the Singularity

As a new Nature paper points out, “There are an astonishing 10 to the power of 170 possible board configurations in Go—more than the number of atoms in the known universe.” (Image: DeepMind)
Remember AlphaGo, the first artificial intelligence to defeat a grandmaster at Go?
Well, the program just got a major upgrade, and it can now teach itself how to dominate the game without any human intervention. But get this: In a tournament that pitted AI against AI, this juiced-up version, called AlphaGo Zero, defeated the regular AlphaGo by a whopping 100 games to 0, signifying a major advance in the field. Hear that? It’s the technological singularity inching ever closer.

A new paper published in Nature today describes how the artificially intelligent system that defeated Go grandmaster Lee Sedol in 2016 got its digital ass kicked by a new-and-improved version of itself. And it didn’t just lose by a little—it couldn’t even muster a single win after playing a hundred games. Incredibly, it took AlphaGo Zero (AGZ) just three days to train itself from scratch and acquire literally thousands of years of human Go knowledge simply by playing itself. The only input it had was the positions of the black and white stones on the board. In addition to devising completely new strategies, the new system is also considerably leaner and meaner than the original AlphaGo.
Lee Sedol getting crushed by AlphaGo in 2016. (Image: AP)
Now, every once in a while the field of AI experiences a “holy shit” moment, and this would appear to be one of them. This latest achievement qualifies for a number of reasons.

First of all, the original AlphaGo had the benefit of learning from literally thousands of previously played Go games, including those played by human amateurs and professionals. AGZ, on the other hand, received no help from its human handlers, and had access to absolutely nothing aside from the rules of the game. Using “reinforcement learning,” AGZ played itself over and over again, “starting from random play, and without any supervision or use of human data,” according to the Google-owned DeepMind researchers in their study. This allowed the system to improve and refine its digital brain, known as a neural network, as it continually learned from experience. This basically means that AlphaGo Zero was its own teacher.

“This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge,” notes the DeepMind team in a release. “Instead, it is able to learn tabula rasa [from a clean slate] from the strongest player in the world: AlphaGo itself.”

When playing Go, the system considers the most probable next moves (a “policy network”), and then estimates the probability of winning based on those moves (its “value network”). AGZ requires about 0.4 seconds to make these two assessments. The original AlphaGo was equipped with a pair of neural networks to make similar evaluations, but for AGZ, the DeepMind developers merged the policy and value networks into one, allowing the system to learn more efficiently. What’s more, the new system is powered by four tensor processing units (TPUs)—specialized chips for neural network training. Old AlphaGo needed 48 TPUs.
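The single network with a policy head and a value head described above can be sketched in PyTorch roughly as follows; the input encoding, layer sizes and training step are simplified stand-ins rather than the published architecture.

```python
# Minimal sketch of a combined policy/value network in the spirit of the
# AlphaGo Zero description above. Layer sizes and the training step are
# drastically simplified; this is not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD = 19
MOVES = BOARD * BOARD + 1   # all board points plus "pass"

class PolicyValueNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.policy_head = nn.Linear(channels * BOARD * BOARD, MOVES)
        self.value_head = nn.Linear(channels * BOARD * BOARD, 1)

    def forward(self, x):
        h = self.trunk(x).flatten(1)
        policy_logits = self.policy_head(h)      # scores over possible moves
        value = torch.tanh(self.value_head(h))   # predicted outcome in [-1, 1]
        return policy_logits, value

# One illustrative training step on fake self-play data:
net = PolicyValueNet()
opt = torch.optim.SGD(net.parameters(), lr=0.01, weight_decay=1e-4)

boards = torch.randn(8, 3, BOARD, BOARD)                  # batch of encoded positions
search_pi = torch.softmax(torch.randn(8, MOVES), dim=1)   # target move distributions
outcome = torch.randint(0, 2, (8, 1)).float() * 2 - 1     # game results, +1 or -1

logits, value = net(boards)
loss = F.mse_loss(value, outcome) - (search_pi * F.log_softmax(logits, dim=1)).sum(1).mean()
opt.zero_grad()
loss.backward()
opt.step()
print(f"combined policy+value loss: {loss.item():.3f}")
```

In the full system these two outputs guide a game-tree search during self-play, and the search’s improved move choices plus the final game result become the targets for the next round of training.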

After just three days of self-play training and a total of 4.9 million games played against itself, AGZ acquired the expertise needed to trounce AlphaGo (by comparison, the original AlphaGo had 30 million games for inspiration). After 40 days of self-training, AGZ defeated another, more sophisticated version of AlphaGo called AlphaGo “Master,” which had defeated the world’s best Go players, including the world’s top-ranked player, Ke Jie. Earlier this year, both the original AlphaGo and AlphaGo Master won a combined 60 games against top professionals. The rise of AGZ, it would now appear, has made these previous versions obsolete.


This is a major achievement for AI, and the subfield of reinforcement learning in particular. By teaching itself, the system matched and exceeded human knowledge by an order of magnitude in just a few days, while also developing unconventional strategies and creative new moves.
For Go players, the breakthrough is as sobering as it is exciting; they’re learning things from AI that they could have never learned on their own, or would have needed an inordinate amount of time to figure out.
“[AlphaGo Zero’s] games against AlphaGo Master will surely contain gems, especially because its victories seem effortless,” wrote Andy Okun and Andrew Jackson, members of the American Go Association, in a Nature News and Views article. “At each stage of the game, it seems to gain a bit here and lose a bit there, but somehow it ends up slightly ahead, as if by magic... The time when humans can have a meaningful conversation with an AI has always seemed far off and the stuff of science fiction. But for Go players, that day is here.”
No doubt, AGZ represents a disruptive advance in the world of Go, but what about its potential impact on the rest of the world? According to Nick Hynes, a grad student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), it’ll be a while before a specialized tool like this will have an impact on our daily lives.

“So far, the algorithm described only works for problems where there are a countable number of actions you can take, so it would need modification before it could be used for continuous control problems like locomotion [for instance],” Hynes told Gizmodo. “Also, it requires that you have a really good model of the environment. In this case, it literally knows all of the rules. That would be as if you had a robot for which you could exactly predict the outcomes of actions—which is impossible for real, imperfect physical systems.”

The nice part, he says, is that there are several other lines of AI research that address both of these issues (e.g. machine learning, evolutionary algorithms, etc.), so it’s really just a matter of integration. “The real key here is the technique,” says Hynes.

“As expected—and desired—we’re moving farther away from the classic pattern of getting a bunch of human-labeled data and training a model to imitate it,” he said. “What we’re seeing here is a model free from human bias and presuppositions: It can learn whatever it determines is optimal, which may indeed be more nuanced than our own conceptions of the same. It’s like an alien civilization inventing its own mathematics which allows it to do things like time travel,” to which he added: “Although we’re still far from ‘The Singularity,’ we’re definitely heading in that direction.”

Noam Brown, a Carnegie Mellon University computer scientist who helped to develop the first AI to defeat top humans in no-limit poker, says the DeepMind researchers have achieved an impressive result, and that it could lead to bigger, better things in AI.

“While the original AlphaGo managed to defeat top humans, it did so partly by relying on expert human knowledge of the game and human training data,” Brown told Gizmodo. “That led to questions of whether the techniques could extend beyond Go. AlphaGo Zero achieves even better performance without using any expert human knowledge. It seems likely that the same approach could extend to all perfect-information games [such as chess and checkers]. This is a major step toward developing general-purpose AIs.”

As both Hynes and Brown admit, this latest breakthrough doesn’t mean the technological singularity—that hypothesized time in the future when greater-than-human machine intelligence achieves explosive growth—is imminent. But it should give us pause for thought. Once we teach a system the rules of a game or the constraints of a real-world problem, the power of reinforcement learning makes it possible to simply press the start button and let the system do the rest. It will then figure out the best ways to succeed at the task, devising solutions and strategies that are beyond human capacities, and possibly even human comprehension.

As noted, AGZ and the game of Go represent an oversimplified, constrained, and highly predictable picture of the world, but in the future, AI will be tasked with more complex challenges. Eventually, self-teaching systems will be used to solve more pressing problems, such as protein folding to conjure up new medicines and biotechnologies, figuring out ways to reduce energy consumption, or designing new materials. A highly generalized self-learning system could also be tasked with improving itself, leading to artificial general intelligence (i.e. a very human-like intelligence) and even artificial superintelligence.

As the DeepMind researchers conclude in their study, “Our results comprehensively demonstrate that a pure reinforcement learning approach is fully feasible, even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules.”

And indeed, now that human players are no longer dominant in games like chess and Go, it can be said that we’ve already entered into the era of superintelligence. This latest breakthrough is the tiniest hint of what’s still to come.

[Nature]

ORIGINAL: Gizmodo 
By George Dvorsky 
2017/10/18

Thursday, September 14, 2017

IBM Makes Breakthrough in Race to Commercialize Quantum Computers


Photographer: David Paul Morris
Researchers at International Business Machines Corp. have developed a new approach for simulating molecules on a quantum computer.

The breakthrough, outlined in a research paper to be published in the scientific journal Nature on Thursday, uses a technique that could eventually allow quantum computers to solve difficult problems in chemistry and electromagnetism that cannot be solved by even the most powerful supercomputers today.

In the experiments described in the paper, IBM researchers used a quantum computer to derive the lowest energy state of a molecule of beryllium hydride. Knowing the energy state of a molecule is a key to understanding chemical reactions.

In the case of beryllium hydride, a supercomputer can solve this problem, but the standard techniques for doing so cannot be used for large molecules because the number of variables exceeds the computational power of even these machines.

The IBM researchers created a new algorithm specifically designed to take advantage of the capabilities of a quantum computer that has the potential to run similar calculations for much larger molecules, the company said.
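The article does not spell out the algorithm, but work of this kind generally follows a variational pattern: prepare a parameterized trial state, measure its energy, and adjust the parameters until the energy stops dropping. Below is a toy, purely classical Python illustration of that idea for a made-up two-level “molecule”; nothing here reflects IBM’s actual circuits.

```python
# Toy illustration of the variational idea behind finding a molecule's lowest
# energy state: scan a parameterized trial state and keep the minimum energy.
# The 2x2 "Hamiltonian" is invented; real molecular Hamiltonians are far larger.
import numpy as np

H = np.array([[1.0, 0.5],
              [0.5, -0.3]])   # a made-up two-level Hamiltonian (Hermitian)

def trial_state(theta):
    """Single-parameter trial wavefunction |psi(theta)>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = trial_state(theta)
    return psi @ H @ psi        # expectation value <psi|H|psi>

thetas = np.linspace(0, 2 * np.pi, 721)
energies = [energy(th) for th in thetas]
best = int(np.argmin(energies))

print(f"Variational minimum: {energies[best]:.4f} at theta = {thetas[best]:.3f} rad")
print(f"Exact ground-state energy: {np.linalg.eigvalsh(H)[0]:.4f}")
```

On real hardware the energy evaluation is done by the quantum processor and the parameter updates by a classical optimizer, which is what the researchers hope will scale to molecules too large for standard techniques.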

The problem with existing quantum computers – including the one IBM used for this research – is that they produce errors, and as the size of the molecule being analyzed grows, the calculation strays further and further from chemical accuracy. The inaccuracy in IBM’s experiments varied between 2 and 4 percent, Jerry Chow, the manager of experimental quantum computing for IBM, said in an interview.

Alan Aspuru-Guzik, a professor of chemistry at Harvard University who was not part of the IBM research, said that the Nature paper is an important step. “The IBM team carried out an impressive series of experiments that holds the record as the largest molecule ever simulated on a quantum computer,” he said.

But Aspuru-Guzik said that quantum computers would be of limited value until their calculation errors can be corrected. “When quantum computers are able to carry out chemical simulations in a numerically exact way, most likely when we have error correction in place and a large number of logical qubits, the field will be disrupted,” he said in a statement. He said applying quantum computers in this way could lead to the discovery of new pharmaceuticals or organic materials.

IBM has been pushing to commercialize quantum computers and recently began allowing anyone to experiment with running calculations on a 16-qubit quantum computer it has built to demonstrate the technology.

In a classical computer, information is stored using binary units, or bits. A bit is either a 0 or a 1. A quantum computer instead takes advantage of quantum mechanical properties to process information using quantum bits, or qubits. A qubit can be both 0 and 1 at the same time, or any blend of the two. Also, in a classical computer, each logic gate functions independently. In a quantum computer, the qubits affect one another. This allows a quantum computer, in theory, to process information far more efficiently than a classical computer.

The machine IBM used for the Nature paper consisted of seven qubits created from supercooled superconducting materials. In the experiment, six of these qubits were used to map the energy states of the six electrons in the beryllium hydride molecule. Rather than providing a single, precise and accurate answer, as a classical computer does, a quantum computer must run a calculation hundreds of times, with an average used to arrive at a final answer.
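The run-it-hundreds-of-times-and-average point can be illustrated in a few lines of Python: simulate measuring a qubit prepared in a superposition and watch the averaged result settle toward the exact expectation value as the number of shots grows. The state and shot counts are arbitrary illustrative choices.

```python
# Simulate repeated measurements ("shots") of a single qubit prepared in a
# superposition, and average the outcomes to estimate an expectation value.
# The state and the shot counts are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)

theta = np.pi / 3                      # qubit state cos(t/2)|0> + sin(t/2)|1>
p1 = np.sin(theta / 2) ** 2            # probability of measuring 1
exact_z = 1 - 2 * p1                   # exact expectation value of Z

for shots in (10, 100, 1000, 10000):
    outcomes = rng.random(shots) < p1          # True where the shot reads 1
    estimate = 1 - 2 * outcomes.mean()         # sample estimate of <Z>
    print(f"{shots:>6} shots: <Z> ≈ {estimate:+.3f} (exact {exact_z:+.3f})")
```

More shots give a steadier estimate, which is why the chemistry results above are averages over many repeated runs.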

Chow said his team is currently working to improve the speed of its quantum computer with the aim of reducing the time it takes to run each calculation from seconds to microseconds. He said they were also working on ways to reduce its error rate.

IBM is not the only company working on quantum computing. Alphabet Inc.’s Google is working toward creating a 50-qubit quantum computer. The company has pledged to use this machine to solve a previously unsolvable calculation from chemistry or electromagnetism by the end of the year. Also competing to commercialize quantum computing are Rigetti Computing, a startup in Berkeley, California, which is building its own machine, and Microsoft Corp., which is working with an unproven quantum computing architecture that is, in theory, inherently error-free. D-Wave Systems Inc., a Canadian company, is currently the only company to sell quantum computers commercially.


ORIGINAL: Bloomberg
By Jeremy Kahn September 13, 2017