Saturday, October 21, 2017

Miniature water droplets could solve an origin-of-life riddle, Stanford researchers find

Before life could begin, something had to kickstart the production of critical molecules. That something may have been as simple as a mist made up of tiny drops of water.

It is one of the great ironies of biochemistry: life on Earth could not have begun without water, yet water stymies some chemical reactions necessary for life itself.
Chemistry Professor Richard Zare (Image credit: L.A. Cicero)
Now, researchers report today in Proceedings of the National Academy of Sciences that they have found a novel, even poetic solution to the so-called “water problem”: miniature droplets of water, formed perhaps in the mist of a crashing ocean wave or in the clouds in the sky.

The water problem relates primarily to the element phosphorus, which is attached to a variety of life’s molecules through a process called phosphorylation. “You and I are alive because of phosphorus and phosphorylation,” said Richard Zare, a professor of chemistry and one of the paper’s senior authors. “You can’t have life without phosphorus.”

The water problem
Phosphorus is a necessary ingredient in many molecules critical for life, including our DNA, its relative RNA and ATP, the molecule that makes up our body’s energy storage system.
But ordinarily water gets in the way of producing those chemicals. Modern life has evolved ways of sidestepping that problem in the form of enzymes that help phosphorylation along. But how primitive components of these molecules formed before the workarounds evolved remains a controversial and at times slightly oddball subject. Among the proposed solutions are highly reactive forms of extraterrestrial phosphorus and heating powered by naturally occurring nuclear reactions.

Microdroplets solve the phosphorylation problem in a relatively elegant way, in large part because they have geometry on their side. It turns out that water is mostly a problem when the phosphate is floating around inside a pool of water or a primitive ocean, rather than on its surface.

Microdroplets are mostly surface. They satisfy life’s need to form in and around water while still providing enough surface area for phosphorylation and other reactions to occur.

In fact, the large amount of surface area provided by microdroplets is already known to be a great place for chemistry. Previous experiments suggest microdroplets can increase reaction rates for other processes by a thousand or even a million times, depending on the details of the reaction being studied.
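To make the geometry concrete, here is a rough back-of-the-envelope calculation (the radii are chosen purely for illustration, not taken from the study): the surface-to-volume ratio of a sphere scales as the inverse of its radius, so a micrometer-scale droplet exposes roughly a million times more surface per unit of water than a meter-scale volume does.

```latex
% Surface-to-volume ratio of a spherical droplet of radius r
\frac{S}{V} = \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r}

% Illustrative comparison (radii chosen for scale only):
\left.\frac{S}{V}\right|_{r = 1\,\mu\mathrm{m}} \approx 3\times 10^{6}\ \mathrm{m^{-1}}
\qquad\text{vs.}\qquad
\left.\frac{S}{V}\right|_{r = 1\,\mathrm{m}} = 3\ \mathrm{m^{-1}}
```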

Spontaneous molecules
Microdroplets seemed like a possible solution to the water problem. But to show that they really work, Zare and his colleagues sprayed tiny droplets of water, laced with phosphorous and other chemicals, into a chamber where the resulting compounds could be analyzed. They found several phosphate-containing molecules occurred spontaneously on these lab-made microdroplets without any catalyst to get them started. Those molecules included sugar phosphates, which are a step in how our cells create energy, and one of the molecules that make up RNA, a DNA relative that primitive organisms use to carry their genetic code. Both reactions are rare at best in larger volumes of water.

That observation, joined with the fact that microdroplets are ubiquitous – from clouds in the sky to the mist created by a crashing ocean wave – suggests that they could have played a role in fostering life on Earth. In the future, Zare hopes to look for phosphates that make up proteins and other molecules.

Even if he can produce those compounds, however, Zare does not believe he and his colleagues will have found the one true solution to the origin of life. “I don’t think we’re going to understand exactly how life began on Earth,” said Zare, who is also the Marguerite Blake Wilbur Professor in Natural Science. Essentially, he said, that is because no one can go back in time to watch what happened as life emerged and there is no good fossil record for the formation of biomolecules. “But we could understand some of the possibilities,” he added.

Zare is also a member of the Stanford Cardiovascular Institute, the Stanford Cancer Institute, the Stanford Neurosciences Institute and the Stanford Woods Institute for the Environment. Additional Stanford authors are postdoctoral fellows Inho Nam and Jae Kyoo Lee. Hong Gil Nam of DGIST in South Korea is co-senior author with Zare. The work was supported by the Institute for Basic Science (South Korea) and the U. S. Air Force Office of Scientific Research through a Basic Research Initiative grant.


ORIGINAL: Stanford News
BY NATHAN COLLINS
OCTOBER 20, 2017

New Research Points to a Genetic Switch That Can Let Our Bodies Talk to Electronics


IN BRIEF
Our bodies are biologically based and therefore are not equipped to communicate with electronics efficiently. New research could make it possible to genetically engineer our cells to be able to communicate with electronics.

The development has the potential to allow us to eventually build apps that autonomously detect and treat disease.

Microelectronics has transformed our lives. Cellphones, earbuds, pacemakers, defibrillators – all these and more rely on microelectronics’ very small electronic designs and components. Microelectronics has changed the way we collect, process and transmit information.

Such devices, however, rarely provide access to our biological world; there are technical gaps. We can’t simply connect our cellphones to our skin and expect to gain health information. For instance, is there an infection? What type of bacteria or virus is involved? We also can’t program the cellphone to make and deliver an antibiotic, even if we knew whether the pathogen was Staph or Strep. There’s a translation problem when you want the world of biology to communicate with the world of electronics.

The research we’ve just published with colleagues in Nature Communications brings us one step closer to closing that communication gap.
Electronic control of gene expression and cell behaviour in Escherichia coli through redox signalling

ABSTRACT:
The ability to interconvert information between electronic and ionic modalities has transformed our ability to record and actuate biological function. Synthetic biology offers the potential to expand communication ‘bandwidth’ by using biomolecules and providing electrochemical access to redox-based cell signals and behaviours. While engineered cells have transmitted molecular information to electronic devices, the potential for bidirectional communication stands largely untapped. Here we present a simple electrogenetic device that uses redox biomolecules to carry electronic information to engineered bacterial cells in order to control transcription from a simple synthetic gene circuit. Electronic actuation of the native transcriptional regulator SoxR and transcription from the PsoxS promoter allows cell response that is quick, reversible and dependent on the amplitude and frequency of the imposed electronic signals. Further, induction of bacterial motility and population based cell-to-cell communication demonstrates the versatility of our approach and potential to drive intricate biological behaviours.

Source: Nature Communications
Rather than relying on the usual molecular signals, like hormones or nutrients, that control a cell’s gene expression, we created a synthetic “switching” system in bacterial cells that recognizes electrons instead. This new technology – a link between electrons and biology – may ultimately allow us to program our phones or other microelectronic devices to autonomously detect and treat disease.

COMMUNICATING WITH ELECTRONS, NOT MOLECULES
One of the barriers scientists have encountered when trying to link microelectronic devices with biological systems has to do with information flow. In biology, almost all activity is made possible by the transfer of molecules – glucose, epinephrine, cholesterol, insulin – signaling between cells and tissues. Infecting bacteria secrete molecular toxins and attach to our skin using molecular receptors. To treat an infection, we need to detect these molecules to identify the bacteria, discern their activities and determine how to best respond.

Microelectronic devices don’t process information with molecules. A microelectronic device typically has silicon, gold, chemicals like boron or phosphorus and an energy source that provides electrons. By themselves, they’re poorly suited to engage in molecular communication with living cells.

Free electrons don’t exist in biological systems, so there’s almost no way to connect with microelectronics. There is, however, a small class of molecules that stably shuttle electrons. These are called “redox” molecules; they can transport electrons, sort of like a wire does. The difference is that in a wire, electrons can flow freely to any location; redox molecules must undergo chemical reactions – oxidation or reduction reactions – to “hand off” electrons.
Bacteria are engineered to respond to a redox molecule activated by an electrode, creating an electrogenetic switch. Bentley and Payne, CC BY-ND

TURNING CELLS ON AND OFF
Capitalizing on the electronic nature of redox molecules, we genetically engineered bacteria to respond to them. We focused on redox molecules that could be “programmed” by the electrode of a microelectronic device. The device toggles the molecule’s oxidation state: it is either oxidized (loses an electron) or reduced (gains an electron). The electron is supplied by a typical energy source in electronics, like a battery.

We wanted our bacterial cells to turn “on” and “off” in response to an applied voltage – a voltage that oxidized a naturally occurring redox molecule, pyocyanin.

Electrically oxidizing pyocyanin allowed us to control our engineered cells, turning them on or off so they would synthesize (or not) a fluorescent protein. We could rapidly identify what was happening in these cells because the protein emits a green hue.
(a) Device-mediated electronic input consists of applied potential (blue or red step functions) for controlling the oxidation state of redox mediators (transduced input). Redox mediators intersect with cells to actuate transcription and, depending on the actuated gene of interest, control biological output.
(b) The electrogenetic device consists of the region encompassing the gene coding for the SoxR protein and the divergent overlapping PsoxR/PsoxS promoters. A gene of interest is placed downstream of the PsoxS promoter. Pyo (O) initiates gene induction and Fcn (R/O), through interactions with respiratory machinery, allows electronic control of induction level. Fcn (R/O), ferro/ferricyanide; Pyo, pyocyanin. The oxidation state of both redox mediators is colorimetrically indicated (Fcn (O) is a yellow pentagon; Fcn (R) is a white pentagon; Pyo (O) is a blue hexagon; Pyo (R) is a grey hexagon). Encircled ‘e−’ and arrows indicate electron movement.
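To make the switching logic concrete, here is a minimal sketch, in Python, of the signal chain described above: an applied potential sets the oxidation state of pyocyanin, oxidized pyocyanin activates SoxR, and active SoxR drives transcription from PsoxS of whatever gene sits downstream (a GFP reporter in the experiment). The threshold voltage and the function names are illustrative placeholders, not values or code from the paper.

```python
# Toy model of the electrogenetic switch (illustrative values only).

def pyocyanin_state(applied_volts, oxidation_threshold=0.2):
    """Electrode input: a sufficiently positive potential oxidizes pyocyanin."""
    return "oxidized" if applied_volts >= oxidation_threshold else "reduced"

def soxr_active(pyo_state):
    """Oxidized pyocyanin activates the native transcriptional regulator SoxR."""
    return pyo_state == "oxidized"

def psoxs_output(soxr_is_active, gene_of_interest="GFP"):
    """Active SoxR drives transcription from PsoxS of the downstream gene."""
    return gene_of_interest if soxr_is_active else None

for volts in (0.5, -0.5):  # oxidizing potential, then reducing potential
    product = psoxs_output(soxr_active(pyocyanin_state(volts)))
    print(f"{volts:+.1f} V -> cells make: {product or 'nothing (switch off)'}")
```

Reversing the sign of the applied potential, as in the second loop iteration, corresponds to the “off” state of the switch described later in the article.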

In another example, we made bacteria that, when switched on, would swim from a stationary position. Bacteria normally swim in starts and stops referred to as a “run” or a “tumble.” The “run” ensures they move in a straight path. When they “tumble,” they essentially remain in one spot. A protein called CheZ controls the “run” portion of a bacterium’s swimming activity. Our electrogenetic switch turned on the synthesis of CheZ, so that the bacteria could move forward.
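A toy random walk shows why this matters for motility: when the switch is on and CheZ is made, runs dominate and the simulated cell covers ground; when it is off, tumbles dominate and the cell stays near its starting point. The probabilities below are invented for the illustration and are not measured values.

```python
# One-dimensional caricature of run-and-tumble swimming (invented probabilities).
import random

def distance_travelled(run_probability, steps=1000):
    position, heading = 0.0, 1.0
    for _ in range(steps):
        if random.random() < run_probability:
            position += heading                      # "run": keep moving along the heading
        else:
            heading = random.choice([-1.0, 1.0])     # "tumble": reorient in place
    return abs(position)

print("switch on  (CheZ made):  ", distance_travelled(run_probability=0.9))
print("switch off (little CheZ):", distance_travelled(run_probability=0.1))
```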
Bacteria can naturally join forces as biofilms and work together. CDC/Janice Carr, CC BY
We were also able to electrically signal a community of cells to exhibit collective behavior. We made cells with switches controlling the synthesis of a signaling molecule that diffuses to neighboring cells and, in turn, causes changes in their behavior. Electric current turned on cells that, in turn, “programmed” a natural biological signaling process to alter the behavior of nearby cells. We exploited bacterial quorum sensing – a natural process where bacterial cells “talk” to their neighbors and the collection of cells can behave in ways that benefit the entire community.

Perhaps even more interesting, we showed that we could both turn gene expression on and turn it off. By reversing the polarity on the electrode, the oxidized pyocyanin becomes reduced – its inactive form. The cells that had been turned on were engineered to quickly revert to their original state. In this way, we demonstrated the ability to cycle the electrically programmed behavior on and off, repeatedly.

Interestingly, the on-and-off switch enabled by pyocyanin was fairly weak. By including another redox molecule, ferricyanide, we found a way to amplify the entire system so that gene expression could be switched strongly, both on and off. The entire system was robust, repeatable and didn’t negatively affect the cells.

SENSING AND RESPONDING ON A CELLULAR LEVEL
Armed with this advance, devices could potentially electrically stimulate bacteria to make therapeutics and deliver them to a site. For example, imagine swallowing a small microelectronic capsule that could record the presence of a pathogen in your GI tract and also contain living bacterial factories that could make an antimicrobial or other therapy – all in a programmable, autonomous system.

This current research ties into previous work done here at the University of Maryland, where researchers had discovered ways to “record” biological information by sensing the biological environment and, based on the prevailing conditions, “writing” electrons to devices. We and our colleagues “sent out” redox molecules from electrodes, let those molecules interact with the microenvironment near the electrode and then drew them back to the electrode so they could inform the device of what they’d seen. This mode of “molecular communication” is somewhat analogous to sonar, with redox molecules used instead of sound waves.

These molecular communication efforts were used to identify pathogens, monitor “stress” levels in the blood of individuals with schizophrenia and even detect differences in the melanin of people with red hair. For nearly a decade, the Maryland team has developed methodologies that exploit redox molecules to interrogate biology by writing the information directly to devices with electrochemistry.

Perhaps it is now time to integrate these technologies:

  • Use molecular communication to sense biological function and transfer the information to a device. 
  • Then use the device – maybe a small capsule or perhaps even a cellphone – to program bacteria to make chemicals and other compounds that issue new directions to the biological system. 

It may sound fantastical and many years away from practical use, but our team is working hard on such valuable applications… stay tuned!

ORIGINAL: Futurism

Wednesday, October 18, 2017

Stunning AI Breakthrough Takes Us One Step Closer to the Singularity

As a new Nature paper points out, “There are an astonishing 10 to the power of 170 possible board configurations in Go—more than the number of atoms in the known universe.” (Image: DeepMind)
Remember AlphaGo, the first artificial intelligence to defeat a grandmaster at Go?
Well, the program just got a major upgrade, and it can now teach itself how to dominate the game without any human intervention. But get this: In a tournament that pitted AI against AI, this juiced-up version, called AlphaGo Zero, defeated the regular AlphaGo by a whopping 100 games to 0, signifying a major advance in the field. Hear that? It’s the technological singularity inching ever closer.

A new paper published in Nature today describes how the artificially intelligent system that defeated Go grandmaster Lee Sedol in 2016 got its digital ass kicked by a new-and-improved version of itself. And it didn’t just lose by a little—it couldn’t even muster a single win after playing a hundred games. Incredibly, it took AlphaGo Zero (AGZ) just three days to train itself from scratch and acquire literally thousands of years of human Go knowledge simply by playing itself. The only input it had was the positions of the black and white stones on the board.
In addition to devising completely new strategies, the new system is also considerably leaner and meaner than the original AlphaGo.
Lee Sedol getting crushed by AlphaGo in 2016. (Image: AP)
Now, every once in a while the field of AI experiences a “holy shit” moment, and this would appear to be one of them. This latest achievement qualifies for a number of reasons.

First of all, the original AlphaGo had the benefit of learning from literally thousands of previously played Go games, including those played by human amateurs and professionals. AGZ, on the other hand, received no help from its human handlers, and had access to absolutely nothing aside from the rules of the game. Using “reinforcement learning,” AGZ played itself over and over again, “starting from random play, and without any supervision or use of human data,” according to the Google-owned DeepMind researchers in their study. This allowed the system to improve and refine its digital brain, known as a neural network, as it continually learned from experience. This basically means that AlphaGo Zero was its own teacher.
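To see what learning purely from self-play looks like in miniature, here is a toy sketch: tabular value learning for tic-tac-toe, in which a single table plays both sides, starts from random play and updates each move’s value only from the final result of the game it appeared in. This illustrates the reinforcement-learning idea in the paragraph above; it is not DeepMind’s algorithm, which combines deep neural networks with Monte Carlo tree search.

```python
# Minimal self-play sketch: Monte Carlo value learning for tic-tac-toe.
import random
from collections import defaultdict

Q = defaultdict(float)      # Q[(board, move)] -> learned value for the player to move
ALPHA, EPSILON = 0.5, 0.1   # learning rate and exploration rate

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return 'draw' if '.' not in board else None

def choose(board):
    moves = [i for i, cell in enumerate(board) if cell == '.']
    if random.random() < EPSILON:                    # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])   # exploit current knowledge

for _ in range(20_000):                              # self-play: one table plays both sides
    board, history, player = '.' * 9, [], 'X'
    while True:
        move = choose(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        result = winner(board)
        if result is not None:
            break
        player = 'O' if player == 'X' else 'X'
    for state, move, mover in history:               # credit every move with the final outcome
        reward = 0.0 if result == 'draw' else (1.0 if result == mover else -1.0)
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
```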

“This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge,” notes the DeepMind team in a release. “Instead, it is able to learn tabula rasa [from a clean slate] from the strongest player in the world: AlphaGo itself.”

When playing Go, the system considers the most probable next moves (a “policy network”), and then estimates the probability of winning based on those moves (its “value network”). AGZ requires about 0.4 seconds to make these two assessments. The original AlphaGo was equipped with a pair of neural networks to make similar evaluations, but for AGZ, the DeepMind developers merged the policy and value networks into one, allowing the system to learn more efficiently. What’s more, the new system is powered by four tensor processing units (TPUs)—specialized chips for neural network training. Old AlphaGo needed 48 TPUs.
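The merged policy and value networks can be sketched as one model with a shared trunk and two heads, one emitting a probability distribution over moves and the other a single win estimate. The sketch below (in PyTorch) uses made-up layer sizes, three input planes and a 9x9 board purely for illustration; AlphaGo Zero’s actual network is a much deeper residual tower operating on a 19x19 board with many more input planes.

```python
# Illustrative "one trunk, two heads" policy/value network (not AlphaGo Zero's architecture).
import torch
import torch.nn as nn

class PolicyValueNet(nn.Module):
    def __init__(self, board_size=9, channels=32):
        super().__init__()
        self.trunk = nn.Sequential(                  # shared representation of the board
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        flat = channels * board_size * board_size
        self.policy_head = nn.Linear(flat, board_size * board_size + 1)   # all moves + pass
        self.value_head = nn.Sequential(nn.Linear(flat, 64), nn.ReLU(),
                                        nn.Linear(64, 1), nn.Tanh())      # win estimate in [-1, 1]

    def forward(self, board_planes):
        x = self.trunk(board_planes).flatten(1)
        return self.policy_head(x).log_softmax(dim=-1), self.value_head(x)

net = PolicyValueNet()
log_policy, value = net(torch.zeros(1, 3, 9, 9))     # dummy empty-board input
```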

After just three days of self-play training and a total of 4.9 million games played against itself, AGZ acquired the expertise needed to trounce AlphaGo (by comparison, the original AlphaGo had 30 million games for inspiration). After 40 days of self-training, AGZ defeated another, more sophisticated version called AlphaGo “Master,” which had beaten the world’s best Go players, including the top-ranked Ke Jie. Earlier this year, both the original AlphaGo and AlphaGo Master won a combined 60 games against top professionals. The rise of AGZ, it would now appear, has made these previous versions obsolete.


This is a major achievement for AI, and the subfield of reinforcement learning in particular. By teaching itself, the system matched and exceeded human knowledge by an order of magnitude in just a few days, while also developing unconventional strategies and creative new moves.
For Go players, the breakthrough is as sobering as it is exciting; they’re learning things from AI that they could have never learned on their own, or would have needed an inordinate amount of time to figure out.
“[AlphaGo Zero’s] games against AlphaGo Master will surely contain gems, especially because its victories seem effortless,” wrote Andy Okun and Andrew Jackson, members of the American Go Association, in a Nature News and Views article. “At each stage of the game, it seems to gain a bit here and lose a bit there, but somehow it ends up slightly ahead, as if by magic... The time when humans can have a meaningful conversation with an AI has always seemed far off and the stuff of science fiction. But for Go players, that day is here.”
No doubt, AGZ represents a disruptive advance in the world of Go, but what about its potential impact on the rest of the world? According to Nick Hynes, a grad student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), it’ll be a while before a specialized tool like this will have an impact on our daily lives.

“So far, the algorithm described only works for problems where there are a countable number of actions you can take, so it would need modification before it could be used for continuous control problems like locomotion [for instance],” Hynes told Gizmodo. “Also, it requires that you have a really good model of the environment. In this case, it literally knows all of the rules. That would be as if you had a robot for which you could exactly predict the outcomes of actions—which is impossible for real, imperfect physical systems.”

The nice part, he says, is that there are several other lines of AI research that address both of these issues (e.g. machine learning, evolutionary algorithms, etc.), so it’s really just a matter of integration. “The real key here is the technique,” says Hynes.

“As expected—and desired—we’re moving farther away from the classic pattern of getting a bunch of human-labeled data and training a model to imitate it,” he said. “What we’re seeing here is a model free from human bias and presuppositions: It can learn whatever it determines is optimal, which may indeed be more nuanced than our own conceptions of the same. It’s like an alien civilization inventing its own mathematics which allows it to do things like time travel,” to which he added: “Although we’re still far from ‘The Singularity,’ we’re definitely heading in that direction.”

Noam Brown, a Carnegie Mellon University computer scientist who helped to develop the first AI to defeat top humans in no-limit poker, says the DeepMind researchers have achieved an impressive result, and that it could lead to bigger, better things in AI.

“While the original AlphaGo managed to defeat top humans, it did so partly by relying on expert human knowledge of the game and human training data,” Brown told Gizmodo. “That led to questions of whether the techniques could extend beyond Go. AlphaGo Zero achieves even better performance without using any expert human knowledge. It seems likely that the same approach could extend to all perfect-information games [such as chess and checkers]. This is a major step toward developing general-purpose AIs.”

As both Hynes and Brown admit, this latest breakthrough doesn’t mean the technological singularity—that hypothesized time in the future when greater-than-human machine intelligence achieves explosive growth—is imminent. But it should give us pause. Once we teach a system the rules of a game or the constraints of a real-world problem, the power of reinforcement learning makes it possible to simply press the start button and let the system do the rest. It will then figure out the best ways to succeed at the task, devising solutions and strategies that are beyond human capacities, and possibly even human comprehension.

As noted, AGZ and the game of Go represent an oversimplified, constrained, and highly predictable picture of the world, but in the future, AI will be tasked with more complex challenges. Eventually, self-teaching systems will be used to solve more pressing problems, such as protein folding to conjure up new medicines and biotechnologies, finding ways to reduce energy consumption, or designing new materials. A highly generalized self-learning system could also be tasked with improving itself, leading to artificial general intelligence (i.e. a very human-like intelligence) and even artificial superintelligence.

As the DeepMind researchers conclude in their study, “Our results comprehensively demonstrate that a pure reinforcement learning approach is fully feasible, even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules.”

And indeed, now that human players are no longer dominant in games like chess and Go, it can be said that we’ve already entered into the era of superintelligence. This latest breakthrough is the tiniest hint of what’s still to come.

[Nature]

ORIGINAL: Gizmodo 
By George Dvorsky 
2017/10/18