Thursday, January 28, 2016

These 4 cosmic phenomena travel faster than the speed of light


Breaking the light barrier.
When Albert Einstein first postulated that light travels at the same speed everywhere in our Universe, he essentially stamped a speed limit on it: 299,792 kilometres per second (186,282 miles per second) - fast enough to circle the entire Earth about seven and a half times every second. But that's not the whole story. In fact, it's just the beginning.

Before Einstein, mass - the atoms that make up you, me, and everything we see - and energy were treated as separate entities. But in 1905, Einstein forever changed the way physicists view the Universe.

Einstein's special theory of relativity permanently tied mass and energy together in the simple yet fundamental equation E = mc². This little equation predicts that nothing with mass can move as fast as light, or faster. The closest humankind has ever come to reaching the speed of light is inside of powerful particle accelerators like the Large Hadron Collider and the Tevatron.

These colossal machines accelerate subatomic particles to more than 99.99 percent the speed of light, but as Physics Nobel laureate David Gross explains, these particles will never reach the cosmic speed limit.

To do so would require an infinite amount of energy and, in the process, the object's mass would become infinite, which is impossible. (The reason particles of light, called photons, travel at light speeds is because they have no mass.)
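To see why, here is a small numerical sketch (added for illustration, not from the original article) of the relativistic kinetic energy of a one-kilogram object as its speed approaches c:

```python
# Relativistic kinetic energy KE = (gamma - 1) * m * c^2, where
# gamma = 1 / sqrt(1 - (v/c)^2). As v -> c, gamma (and so KE) diverges.
import math

C = 299_792_458.0      # speed of light, m/s
MASS_KG = 1.0          # illustrative 1 kg object

for fraction in (0.5, 0.9, 0.99, 0.9999, 0.999999):
    gamma = 1.0 / math.sqrt(1.0 - fraction ** 2)
    kinetic_energy = (gamma - 1.0) * MASS_KG * C ** 2
    print(f"v = {fraction:>8} c  ->  KE ≈ {kinetic_energy:.3e} J")
```

Each additional step toward c multiplies the cost again, with no finite endpoint - which is the "infinite energy" the theory refers to.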

Since Einstein, physicists have found that certain entities can reach superluminal (that means "faster-than-light") speeds and still follow the cosmic rules laid down by special relativity. While these do not disprove Einstein's theory, they give us insight into the peculiar behavior of light and the quantum realm.

The light equivalent of a sonic boom
When objects travel faster than the speed of sound, they generate a sonic boom. So, in theory, if something travels faster than the speed of light, it should produce something like a "luminal boom". In fact, this light boom happens on a daily basis in facilities around the world - you can see it with your own eyes. It's called Cherenkov radiation, and it shows up as a blue glow inside of nuclear reactors, like in the image above.

Cherenkov radiation is named for Soviet scientist Pavel Alekseyevich Cherenkov, who first measured it in 1934 and was awarded the Nobel Prize in Physics in 1958 for his discovery.

In reactors such as the Advanced Test Reactor, the blue glow appears because the core is submerged in water to keep it cool. In water, light travels at 75 percent of the speed it would in the vacuum of outer space, but the electrons created by the reaction inside the core travel through the water faster than the light does.

Particles, like these electrons, that surpass the speed of light in water, or some other medium such as glass, create a shock wave similar to the shock wave from a sonic boom.

When a rocket, for example, travels through air, it generates pressure waves in front that move away from it at the speed of sound, and the closer the rocket gets to the sound barrier, the less time the waves have to move out of its path. Once it reaches the speed of sound, the waves bunch up, creating a shock front that forms a loud sonic boom.

Similarly, when electrons travel through water at speeds faster than light speed in water, they generate a shock wave of light that sometimes shines as blue light, but can also shine in ultraviolet. While these particles are traveling faster than light does in water, they're not actually breaking the cosmic speed limit of 299,792 kilometres per second (186,282 miles per second).
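For a sense of scale, the threshold follows directly from the 75 percent figure above and the electron's rest energy; here is a short sketch (added for illustration):

```python
# Cherenkov threshold in water: an electron must move faster than
# light's speed in the medium, v > c/n, where n ≈ 1.33 (so v > ~0.75 c).
# Threshold kinetic energy: KE = (gamma - 1) * m_e * c^2.
import math

N_WATER = 1.33                    # refractive index of water
ELECTRON_REST_ENERGY_MEV = 0.511  # electron rest energy, m_e * c^2

beta_threshold = 1.0 / N_WATER                      # ≈ 0.75
gamma = 1.0 / math.sqrt(1.0 - beta_threshold ** 2)  # ≈ 1.51
kinetic_energy_mev = (gamma - 1.0) * ELECTRON_REST_ENERGY_MEV

print(f"threshold speed ≈ {beta_threshold:.2f} c")
print(f"threshold kinetic energy ≈ {kinetic_energy_mev:.2f} MeV")  # ≈ 0.26 MeV
```

Electrons ejected inside a reactor core routinely carry energies well above this threshold, which is why the glow is so common around submerged cores.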

When the rules don't apply
Keep in mind that Einstein's special theory of relativity states that nothing with mass can go faster than the speed of light, and as far as physicists can tell, the Universe abides by that rule. But what about something without mass?

Photons, by their very nature, cannot exceed the speed of light, but particles of light are not the only massless entities in the Universe. Empty space contains no material substance and therefore, by definition, has no mass. "Since nothing is just empty space or vacuum, it can expand faster than light speed since no material object is breaking the light barrier," said theoretical astrophysicist Michio Kaku on Big Think. "Therefore, empty space can certainly expand faster than light."

This is exactly what physicists think happened immediately after the Big Bang during the epoch called inflation, which was first hypothesised by physicists Alan Guth and Andrei Linde in the 1980s. Within a trillionth of a trillionth of a second, the Universe repeatedly doubled in size and as a result, the outer edge of the universe expanded very quickly, much faster than the speed of light.

Quantum entanglement makes the cut
"If I have two electrons close together, they can vibrate in unison, according to the quantum theory," Kaku explains on Big Think. Now, separate those two electrons so that they're hundreds or even thousands of light years apart, and they will keep this instant communication bridge open. (Entanglement)

"If I jiggle one electron, the other electron 'senses' this vibration instantly, faster than the speed of light. Einstein thought that this therefore disproved the quantum theory, since nothing can go faster than light," Kaku wrote.

In fact, in 1935, Einstein, Boris Podolsky, and Nathan Rosen attempted to disprove quantum theory with a thought experiment on what Einstein referred to as "spooky action at a distance".

Ironically, their paper laid the foundation for what today is called the EPR (Einstein-Podolsky-Rosen) paradox, a paradox that describes this instantaneous communication of quantum entanglement - an integral part of some of the world's most cutting-edge technologies, like quantum cryptography.

Dreaming of wormholes
Since nothing with mass can travel faster than light, you can kiss interstellar travel goodbye - at least, in the classical sense of rocketships and flying.

Although Einstein trampled over our aspirations of deep-space roadtrips with his theory of special relativity, he gave us a new hope for interstellar travel with his general theory of relativity in 1915. While special relativity wed mass and energy, general relativity wove space and time together.

"The only viable way of breaking the light barrier may be through general relativity and the warping of space time," Kaku writes. This warping is what we colloquially call a wormhole, which theoretically would let something travel vast distances instantaneously, essentially enabling us to break the cosmic speed limit by traveling great distances in a very short amount of time.

In 1988, theoretical physicist Kip Thorne - the science consultant and executive producer for the recent film Interstellar - used Einstein's equations of general relativity to predict the possibility of wormholes that would forever be open for space travel. But in order to be traversable, these wormholes need some strange, exotic matter holding them open.

"Now it is an amazing fact that exotic matter can exist, thanks to weirdnesses in the laws of quantum physics," Thorne writes in his book The Science of Interstellar.

And this exotic matter has even been made in laboratories here on Earth, but in very tiny amounts. When Thorne proposed his theory of stable wormholes in 1988 he called upon the physics community to help him determine if enough exotic matter could exist in the Universe to support the possibility of a wormhole.

"This triggered a lot of research by a lot of physicists; but today, nearly 30 years later, the answer is still unknown." Thorne writes. At the moment, it's not looking good, "But we are still far from a final answer," he concludes.

This article was originally published by Business Insider.

ORIGINAL: Science Alert
JESSICA ORWIG, BUSINESS INSIDER
21 JAN 2016

Monster Machine Cracks The Game Of Go

Illustration: Google DeepMind/Nature
A computer program has defeated a master of the ancient Chinese game of Go, achieving one of the loftiest of the Grand Challenges of AI at least a decade earlier than anyone had thought possible.

The programmers, at Google’s DeepMind laboratory in London, write in today’s issue of Nature that their program AlphaGo defeated Fan Hui, the European Go champion, 5 games to nil, in a match held last October in the company’s offices. Earlier, the program had won 494 out of 495 games against the best rival Go programs.


AlphaGo’s creators now hope to seal their victory at a 5-game match against Lee Se-dol, the best Go player in the world. That match, for a $1 million prize fund, is scheduled to take place in March in Seoul, South Korea.

The program’s victory marks the rise not merely of the machines but of new methods of computer programming based on self-training neural networks. In support of their claim that this method can be applied broadly, the researchers cited their success, which we reported a year ago, in getting neural networks to learn how to play an entire set of video games from Atari. Future applications, they say, may include financial trading, climate modeling and medical diagnosis.

Not all of AlphaGo’s skill is self-taught. First, the programmers jumpstarted the training by having the program predict moves in a database of master games. It eventually reached a success rate of 57 percent, versus 44 percent for the best rival programs. 

Then, to go beyond mere human performance, the program conducted its own research through a trial-and-error approach that involved playing millions of games against itself. In this fashion it discovered, one by one, many of the rules of thumb that textbooks have been imparting to Go students for centuries. Google DeepMind calls the self-guided method reinforcement learning; it is built on top of “deep learning,” the current AI buzzword.

Not only can self-trained machines surpass the game-playing powers of their creators, they can do so in ways that programmers can’t even explain. It’s a different world from the one that AI’s founders envisaged decades ago.

Commenting on the death yesterday of AI pioneer Marvin Minsky, Demis Hassabis, the lead author of the Nature paper, said: “It would be interesting to see what he would have said. I suspect he would have been pretty surprised at how quickly this has arrived.”

That’s because, as programmers would say, Go is such a bear. Then again, even chess was a bear, at first. Back in 1957, the late Herbert Simon famously predicted that a computer would beat the world champion at chess within a decade. But it was only in 1997 that World Chess Champion Garry Kasparov lost to IBM’s Deep Blue—a multimillion-dollar, purpose-built machine that filled a small room. Today you can download a $100 program to a decently powered laptop and watch it utterly cream any chess player in the world.

Go is harder for machines because the positions are harder to judge and there are a whole lot more positions.

Judgement is harder because the pieces, or “stones,” are all of equal value, whereas those in chess have varying values—a Queen, for instance, is worth nine times more than a pawn, on average. Chess programmers can thus add up those values (and throw in numerical estimates for the placement of pieces and pawns) to arrive at a quick-and-dirty score of a game position. No such expedient exists for Go.
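For illustration, a quick-and-dirty material count of the kind described above might look like the following sketch (the piece values are the standard textbook convention; the board encoding and the example position are made up for this post):

```python
# Material-only evaluation: sum piece values for each side and take the
# difference (positional bonuses omitted). Values are in "pawns".
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(white_counts, black_counts):
    """Positive favours White, negative favours Black."""
    score = 0
    for piece, value in PIECE_VALUES.items():
        score += value * (white_counts.get(piece, 0) - black_counts.get(piece, 0))
    return score

# Example: White has an extra knight but is down two pawns.
print(material_score({"P": 6, "N": 2, "B": 2, "R": 2, "Q": 1},
                     {"P": 8, "N": 1, "B": 2, "R": 2, "Q": 1}))   # -> +1
```

In Go, where every stone carries the same nominal value, no such shortcut gives a meaningful score.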

There are vastly more positions to judge than in chess because Go offers on average 10 times more options at every move and there are about three times as many moves in a game. The number of possible board configurations in Go is estimated at 10 to the 170th power—“more than the number of atoms in the universe,” said Hassabis.

Some researchers tried to adapt to Go some of the forward-search techniques devised for chess; others relied on random simulations of games in the aptly named Monte Carlo method. The Google DeepMind people leapfrogged them all with deep, or convolutional, neural networks, so named because they imitate the brain (up to a point).

A neural network links units that are the computing equivalent of a brain’s neurons—first by putting them into layers, then by stacking the layers. AlphaGo’s are 12 layers deep. Each “neuron” connects to its neighbors in its own layer and also to those in the layers directly above and below it. A signal sent to one neuron causes it to strengthen or weaken its connections to other ones, so over time, the network changes its configuration.

To train the system:

  • you first expose it to input data;
  • next, you test the output signal against the metric you’re using—say, predicting a master’s move—and
  • reinforce correct decisions by strengthening the underlying connections;
  • over time, the system produces better outputs. You might say that it learns. (A minimal sketch of such a loop follows below.)
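As a concrete, deliberately tiny illustration of that loop - written for this post, not taken from DeepMind's code - here is a one-layer network trained to predict a toy "move" label from made-up board features:

```python
# Minimal supervised training loop in the spirit described above:
# show inputs, compare the output to the target "move", and strengthen
# the connections (weights) that would have produced the right answer.
# Everything here (data, sizes, labels) is an illustrative stand-in.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_moves = 9, 4                    # tiny 3x3 "board", 4 candidate moves
X = rng.normal(size=(200, n_inputs))        # fake board features
labels = (X @ rng.normal(size=(n_inputs, n_moves))).argmax(axis=1)

W = np.zeros((n_inputs, n_moves))           # the network's connections
for epoch in range(200):
    logits = X @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    grad = probs.copy()
    grad[np.arange(len(labels)), labels] -= 1.0     # push toward the correct move
    W -= 0.1 * (X.T @ grad) / len(labels)           # strengthen/weaken connections

accuracy = ((X @ W).argmax(axis=1) == labels).mean()
print(f"training accuracy: {accuracy:.0%}")
```
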
AlphaGo has two networks:

  • “The policy network cuts down on the number of moves to look at, and
  • the evaluation network allows you to cut short the depth of that search,” or the number of moves the machine must look ahead, Hassabis said.

“Both neural networks together make the search tractable.”

The main difference from the system that played Atari is the inclusion of a search-ahead function: “In Atari you can do well by reacting quickly to current events,” said Hassabis. “In Go you need a plan.”

After exhaustive training, the two networks, taken by themselves, could play Go as well as any program did. But when the researchers coupled the neural networks to a forward-searching algorithm, the machine was able to dominate rival programs completely. Not only did it win all but one of the hundreds of games it played against them, it was even able to give them a handicap of four extra moves, made at the beginning of the game, and still beat them.
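To make that division of labor concrete, here is a toy sketch written for this post - the game, the policy function and the value function are stand-ins, not AlphaGo's networks - showing how a policy prunes the breadth of a look-ahead search while a value estimate caps its depth:

```python
# Toy look-ahead search: the "policy" ranks moves so only the top few are
# explored (breadth cut), and the "value" estimate replaces deeper search
# at the depth limit (depth cut). Game, policy, and value are illustrative.
MOVES = [-2, -1, 1, 2]

def policy(state):
    """Score candidate moves; a real policy network would be learned."""
    return {m: -abs(state + m) for m in MOVES}        # prefer moves toward 0

def value(state):
    """Cheap position estimate; a real value network would be learned."""
    return -abs(state)

def search(state, depth, top_k=2):
    if depth == 0:
        return value(state)                            # value estimate cuts the depth
    ranked = sorted(MOVES, key=lambda m: policy(state)[m], reverse=True)
    return max(search(state + m, depth - 1, top_k)     # policy cuts the breadth
               for m in ranked[:top_k])

if __name__ == "__main__":
    start = 7
    scores = {m: search(start + m, depth=3) for m in MOVES}
    print("best move from", start, "is", max(scores, key=scores.get))
```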

About that one defeat: “The search has a stochastic [random] element, so there’s always a possibility that it will make a mistake,” David Silver said. “As we improve, we reduce the probability of making a mistake, but mistakes happen. As in that one particular game.”

Anyone might cock an eyebrow at the claim that AlphaGo will have practical spin-offs. Games programmers have often justified their work by promising such things but so far they’ve had little to show for their efforts. IBM’s Deep Blue did nothing but play chess, and IBM’s Watson—designed to beat the television game show Jeopardy!—will need laborious retraining to be of service in its next appointed task of helping doctors diagnose and treat patients.

But AlphaGo’s creators say that because of the generalized nature of their approach, direct spin-offs really will come—this time for sure. And they’ll get started on them just as soon as the March match against the world champion is behind them.


ORIGINAL: IEEE Spectrum
Posted 27 Jan 2016

Scientists Demonstrate Basics of Nucleic Acid Computing Inside Cells

DETAILS: Using strands of nucleic acid, scientists have demonstrated basic computing operations inside a living mammalian cell. Shown examining a cellular “AND” gate are associate professor Philip Santangelo and research scientist Chiara Zurla. (Credit: Rob Felt, Georgia Tech)
Using strands of nucleic acid, scientists have demonstrated basic computing operations inside a living mammalian cell. The research could lead to an artificial sensing system that could control a cell’s behavior in response to such stimuli as the presence of toxins or the development of cancer.

The research uses DNA strand displacement, a technology that has been widely used outside of cells for the design of molecular circuits, motors and sensors. Researchers modified the process to provide both “AND” and “OR” logic gates able to operate inside the living cells and interact with native messenger RNA (mRNA).

The tools they developed could provide a foundation for bio-computers able to sense, analyze and modulate molecular information at the cellular level. Supported by the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation (NSF), the research was reported December 21 in the journal Nature Nanotechnology.

“The whole idea is to be able to take the logic that is used in computers and port that logic into cells themselves,” said Philip Santangelo, an associate professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University. “These devices could sense an aberrant RNA, for instance, and then shut down cellular translation or induce cell death.”

Strand displacement reactions are the biological equivalent of the switches or gates that form the foundation for silicon-based computing. They can be programmed to turn on or off in response to an external stimulus such as a molecule. An “AND” gate, for example, would switch when both conditions were met, while an “OR” gate would switch when either condition was met.

In the switches the researchers used, a fluorophore reporter molecule and its complementary quenching molecule were placed side-by-side to create an “off” mode. Binding of RNA in one of the strands then displaced a portion of nucleic acid, separating the molecules and allowing generation of a signal that created an “on” mode. Two “on” modes on adjacent nucleic acid strands created an “AND” gate.

“Demonstrating individual logic gates is only a first step,” said Georg Seelig, assistant professor of computer science and engineering and electrical engineering at the University of Washington. “In the longer term, we want to expand this technology to create circuits with many inputs, such as those we have constructed in cell-free settings.”

The researchers used ligands designed to bind to specific portions of the nucleic acid strands, which can be created as desired and produced by commercial suppliers.

“We sensed molecules and showed that we could respond to them,” said Santangelo. “We showed that we could utilize native molecules in the cell as part of the circuit, though we haven’t been able to control a cell yet.”

Getting basic computing operations to function inside cells was no easy task, and the research required a number of years to accomplish. Among the challenges were getting the devices into the cells without triggering the switches, providing operation rapid enough to be useful, and not killing the human cell lines that researchers used in the lab.

“We had to chemically change the probes to get them to work inside the cell and to make them stable enough inside the cells,” said Santangelo. “We found that these strand displacement reactions can be slow within the cytosol, so to get them to work faster, we built scaffolding onto the messenger RNA that allowed us to amplify the effects.”

The nucleic acid computers ultimately operated as desired, and the next step is to use their switching to trigger the production of signaling chemicals that would prompt the desired reaction from the cells. Cellular activity is normally controlled by the production of proteins, so the nucleic acid switches will have to be given the ability to produce enough signaling molecules to induce a change.

“We need to generate enough of whatever final signal is needed to get the cell to react,” Santangelo explained. “There are amplification methods used in strand displacement technology, but none of them have been used so far in living cells.”

Even without that final step, the researchers feel they’ve built a foundation that can be used to attain the goal.

“We were able to design some of the basic logical constructs that could be used as building blocks for future work,” Santangelo said. “We know the concentrations of chemicals and the design requirements for individual components, so we can now start putting together a more complicated set of circuits and components.”

Cells, of course, already know how to sense toxic molecules and the development of malignant tendencies, and to then take action. But those safeguards can be turned off by viruses or cancer cells that know how to circumvent natural cellular processes.

“Our mechanism would just give cells a hand at doing this,” Santangelo said. “The idea is to add to the existing machinery to give the cells enhanced capabilities.”

Applying an engineering approach to the biological world sets this example apart from other efforts to control cellular machinery.

“What makes DNA strand displacement circuits unique is that all components are fully rationally designed at the level of the DNA sequence,” said Seelig. “This really makes this technology ideal for an engineering approach. In contrast, many other approaches to controlling the cellular machinery rely on components that are borrowed from biology and are not fully understood.”

Beyond those already mentioned, the research team included Benjamin Groves, Yuan-Jyue Chen and Sergii Pochekailov from the University of Washington and Chiara Zurla and Jonathan Kirschman from Georgia Tech and Emory University.

This material is based on work supported by the Defense Advanced Research Projects Agency (DARPA) under contract W911NF-11-2-0068 and by National Science Foundation CAREER award 1253691. The content is solely the responsibility of the authors and does not necessarily represent the official views of DARPA or the NSF.

Image shows activation of “AND” gates in cells as observed by fluorescence microscopy.
(Credit: Chiara Zurla, Georgia Tech)
CITATION: Benjamin Groves, et al., “Computing in mammalian cells with nucleic acid strand exchange,” Nature Nanotechnology, 2015. http://dx.doi.org/10.1038/nnano.2015.278

Research News
Georgia Institute of Technology
177 North Avenue
Atlanta, Georgia 30332-0181 USA

Media Relations Contact: John Toon (404-894-6986) (joon@gatech.edu).
Writer: John Toon


ORIGINAL: Georgia Tech
January 19, 2016

Thursday, January 21, 2016

Brain waves may be spread by weak electrical field

The research team says the electrical fields could be behind the spread of sleep and theta waves, along with epileptic seizure waves (Credit: Shutterstock)
Mechanism tied to waves associated with epilepsy
Researchers at Case Western Reserve University may have found a new way information is communicated throughout the brain.

Their discovery could lead to identifying possible new targets to investigate brain waves associated with memory and epilepsy and better understand healthy physiology.

They recorded neural spikes traveling at a speed too slow for known mechanisms to circulate throughout the brain. The only explanation, the scientists say, is that the wave is spread by a mild electrical field they could detect. Computer modeling and in-vitro testing support their theory.

"Others have been working on such phenomena for decades, but no one has ever made these connections," said Steven J. Schiff, director of the Center for Neural Engineering at Penn State University, who was not involved in the study. "The implications are that such directed fields can be used to modulate both pathological activities, such as seizures, and to interact with cognitive rhythms that help regulate a variety of processes in the brain."

Scientists Dominique Durand, Elmer Lincoln Lindseth Professor in Biomedical Engineering at Case School of Engineering and leader of the research, former graduate student Chen Sui and current PhD students Rajat Shivacharan and Mingming Zhang, report their findings in The Journal of Neuroscience.

"Researchers have thought that the brain's endogenous electrical fields are too weak to propagate wave transmission," Durand said. "But it appears the brain may be using the fields to communicate without synaptic transmissions, gap junctions or diffusion."

How the fields may work
Computer modeling and testing on mouse hippocampi (the central part of the brain associated with memory and spatial navigation) in the lab indicate the field begins in one cell or group of cells.

Although the electrical field is of low amplitude, the field excites and activates immediate neighbors, which, in turn, excite and activate immediate neighbors, and so on across the brain at a rate of about 0.1 meter per second.

Blocking the endogenous electrical field in the mouse hippocampus and increasing the distance between cells in the computer model and in-vitro both slowed the speed of the wave.

These results, the researchers say, confirm that the propagation mechanism for the activity is consistent with the electrical field.

Because sleep waves and theta waves--which are associated with forming memories during sleep--and epileptic seizure waves travel at about 1 meter per second, the researchers are now investigating whether the electrical fields play a role in normal physiology and in epilepsy.

If so, they will try to discern what information the fields may be carrying. Durand's lab is also investigating where the endogenous spikes come from.

ORIGINAL: Eurekalert
14-JAN-2016

Memory capacity of brain is 10 times more than previously thought

Data from the Salk Institute shows the brain’s memory capacity is in the petabyte range, as much as the entire Web

LA JOLLA—Salk researchers and collaborators have achieved critical insight into the size of neural connections, putting the memory capacity of the brain far higher than common estimates. The new work also answers a longstanding question as to how the brain is so energy efficient and could help engineers build computers that are incredibly powerful but also conserve energy.

"This is a real bombshell in the field of neuroscience,said Terry Sejnowski from the Salk Institute for Biological Studies. "Our new measurements of the brain's memory capacity increase conservative estimates by a factor of 10 to at least a petabyte (215 Bytes = 1000 TeraBytes), in the same ballpark as the World Wide Web."

Our memories and thoughts are the result of patterns of electrical and chemical activity in the brain. A key part of the activity happens when branches of neurons, much like electrical wire, interact at certain junctions, known as synapses. An output ‘wire’ (an axon) from one neuron connects to an input ‘wire’ (a dendrite) of a second neuron. Signals travel across the synapse as chemicals called neurotransmitters to tell the receiving neuron whether to convey an electrical signal to other neurons. Each neuron can have thousands of these synapses with thousands of other neurons.



“When we first reconstructed every dendrite, axon, glial process, and synapse from a volume of hippocampus the size of a single red blood cell, we were somewhat bewildered by the complexity and diversity amongst the synapses,” says Kristen Harris, co-senior author of the work and professor of neuroscience at the University of Texas, Austin. “While I had hoped to learn fundamental principles about how the brain is organized from these detailed reconstructions, I have been truly amazed at the precision obtained in the analyses of this report.”

Synapses are still a mystery, though their dysfunction can cause a range of neurological diseases. Larger synapses—with more surface area and vesicles of neurotransmitters—are stronger, making them more likely to activate their surrounding neurons than medium or small synapses.

The Salk team, while building a 3D reconstruction of rat hippocampus tissue (the memory center of the brain), noticed something unusual. In some cases, a single axon from one neuron formed two synapses reaching out to a single dendrite of a second neuron, signifying that the first neuron seemed to be sending a duplicate message to the receiving neuron.

At first, the researchers didn’t think much of this duplicity, which occurs about 10 percent of the time in the hippocampus. But Tom Bartol, a Salk staff scientist, had an idea: if they could measure the difference between two very similar synapses such as these, they might glean insight into synaptic sizes, which so far had only been classified in the field as small, medium and large.

In a computational reconstruction of brain tissue in the hippocampus, Salk scientists and UT-Austin scientists found the unusual occurrence of two synapses from the axon of one neuron (translucent black strip) forming onto two spines on the same dendrite of a second neuron (yellow). Separate terminals from one neuron’s axon are shown in synaptic contact with two spines (arrows) on the same dendrite of a second neuron in the hippocampus. The spine head volumes, synaptic contact areas (red), neck diameters (gray) and number of presynaptic vesicles (white spheres) of these two synapses are almost identical. Credit: Salk Institute
To do this, researchers used advanced microscopy and computational algorithms they had developed to image rat brains and reconstruct the connectivity, shapes, volumes and surface area of the brain tissue down to a nanomolecular level.

The scientists expected the synapses would be roughly similar in size, but were surprised to discover the synapses were nearly identical.

"We were amazed to find that the difference in the sizes of the pairs of synapses were very small, on average, only about 8 percent different in size,said Tom Bartol, one of the scientists. "No one thought it would be such a small difference. This was a curveball from nature."

Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.

It was known before that the range in sizes between the smallest and largest synapses was a factor of 60 and that most are small.

But armed with the knowledge that synapses of all sizes could vary in increments as little as eight percent between sizes within a factor of 60, the team determined there could be about 26 categories of sizes of synapses, rather than just a few.

“Our data suggests there are 10 times more discrete sizes of synapses than previously thought,” says Bartol. In computer terms, 26 sizes of synapses correspond to about 4.7 “bits” of information. Previously, it was thought that the brain was capable of just one to two bits for short and long memory storage in the hippocampus.
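The 4.7-bit figure is simply the base-2 logarithm of the number of distinguishable synapse sizes, i.e. the number of bits needed to specify one state out of 26:

```python
import math

# Information needed to distinguish one of 26 synapse sizes, in bits.
print(math.log2(26))   # ≈ 4.70
```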

"This is roughly an order of magnitude of precision more than anyone has ever imagined,said Sejnowski

What makes this precision puzzling is that hippocampal synapses are notoriously unreliable. When a signal travels from one neuron to another, it typically activates that second neuron only 10 to 20 percent of the time.

“We had often wondered how the remarkable precision of the brain can come out of such unreliable synapses,” says Bartol. One answer, it seems, is in the constant adjustment of synapses, averaging out their success and failure rates over time. The team used their new data and a statistical model to find out how many signals it would take a pair of synapses to get to that eight percent difference.

The researchers calculated that 
  • for the smallest synapses, about 1,500 events cause a change in their size/ability (20 minutes) and 
  • for the largest synapses, only a couple hundred signaling events (1 to 2 minutes) cause a change.
"This means that every 2 or 20 minutes, your synapses are going up or down to the next size," said Bartol. "The synapses are adjusting themselves according to the signals they receive."

From left: Terry Sejnowski, Cailey Bromer and Tom Bartol. Credit: Salk Institute
“Our prior work had hinted at the possibility that spines and axons that synapse together would be similar in size, but the reality of the precision is truly remarkable and lays the foundation for whole new ways to think about brains and computers,” says Harris. “The work resulting from this collaboration has opened a new chapter in the search for learning and memory mechanisms.” Harris adds that the findings suggest more questions to explore, for example, whether similar rules apply for synapses in other regions of the brain and how those rules differ during development and as synapses change during the initial stages of learning.

"The implications of what we found are far-reaching. Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us."

The findings also offer a valuable explanation for the brain’s surprising efficiency. The waking adult brain generates only about 20 watts of continuous power—as much as a very dim light bulb. The Salk discovery could help computer scientists build ultra-precise but energy-efficient computers, particularly ones that employ deep learning and neural nets techniques capable of sophisticated learning and analysis, such as speech, object recognition and translation.

"This trick of the brain absolutely points to a way to design better computers,"said Sejnowski. "Using probabilistic transmission turns out to be as accurate and require much less energy for both computers and brains."

Other authors on the paper were Cailey Bromer of the Salk Institute; Justin Kinney of the McGovern Institute for Brain Research; and Michael A. Chirillo and Jennifer N. Bourne of the University of Texas, Austin.

The work was supported by the NIH and the Howard Hughes Medical Institute.

ORIGINAL: Salk.edu
January 20, 2016

Wednesday, January 20, 2016

IU scientists create 'nano-reactor' for the production of hydrogen biofuel

Combining bacterial genes and virus shell creates a highly efficient, renewable material used in generating power from water

BLOOMINGTON, Ind. -- Scientists at Indiana University have created a highly efficient biomaterial that catalyzes the formation of hydrogen -- one half of the "holy grail" of splitting H2O to make hydrogen and oxygen for fueling cheap and efficient cars that run on water.

A modified enzyme that gains strength from being protected within the protein shell -- or "capsid" -- of a bacterial virus, this new material is 150 times more efficient than the unaltered form of the enzyme.
An artist's rendering of P22-Hyd, a new biomaterial created by encapsulating a hydrogen-producing enzyme within a virus shell.
Photo by Trevor Douglas
 The process of creating the material was recently reported in "Self-assembling biomolecular catalysts for hydrogen production" in the journal Nature Chemistry.

"Essentially, we've taken a virus's ability to self-assemble myriad genetic building blocks and incorporated a very fragile and sensitive enzyme with the remarkable property of taking in protons and spitting out hydrogen gas," said Trevor Douglas, the Earl Blough Professor of Chemistry in the IU Bloomington College of Arts and Sciences' Department of Chemistry, who led the study. "The end result is a virus-like particle that behaves the same as a highly sophisticated material that catalyzes the production of hydrogen."
Trevor Douglas | Photo by Montana State University
Other IU scientists who contributed to the research were Megan C. Thielges, an assistant professor of chemistry; Ethan J. Edwards, a Ph.D. student; and Paul C. Jordan, a postdoctoral researcher at Alios BioPharma, who was an IU Ph.D. student at the time of the study.

The genetic material used to create the enzyme, hydrogenase, is produced by two genes from the common bacteria Escherichia coli, inserted inside the protective capsid using methods previously developed by these IU scientists. The genes, hyaA and hyaB, are two genes in E. coli that encode key subunits of the hydrogenase enzyme. The capsid comes from the bacterial virus known as bacteriophage P22.
Illustration showing the release of NiFe-hydrogenase from inside the virus shell, or "capsid," of bacteriophage P22.
Photo by Trevor Douglas
The resulting biomaterial, called "P22-Hyd," is not only more efficient than the unaltered enzyme but also is produced through a simple fermentation process at room temperature.

The material is potentially far less expensive and more environmentally friendly to produce than other materials currently used to create fuel cells. The costly and rare metal platinum, for example, is commonly used to catalyze hydrogen as fuel in products such as high-end concept cars.

"This material is comparable to platinum, except it's truly renewable," Douglas said. "You don't need to mine it; you can create it at room temperature on a massive scale using fermentation technology; it's biodegradable. It's a very green process to make a very high-end sustainable material."

In addition, P22-Hyd both breaks the chemical bonds of water to create hydrogen and also works in reverse to recombine hydrogen and oxygen to generate power. "The reaction runs both ways -- it can be used either as a hydrogen production catalyst or as a fuel cell catalyst," Douglas said.

The form of hydrogenase is one of three occurring in nature: di-iron (FeFe)-, iron-only (Fe-only)- and nickel-iron (NiFe)-hydrogenase. The third form was selected for the new material due to
  • its ability to easily integrate into biomaterials, and
  • its tolerance of exposure to oxygen.
NiFe-hydrogenase also gains significantly greater resistance upon encapsulation to breakdown from chemicals in the environment, and it retains the ability to catalyze at room temperature. Unaltered NiFe-hydrogenase, by contrast, is highly susceptible to destruction from chemicals in the environment and breaks down at temperatures above room temperature -- both of which make the unprotected enzyme a poor choice for use in manufacturing and commercial products such as cars.

These sensitivities are "some of the key reasons enzymes haven't previously lived up to their promise in technology," Douglas said. Another is the difficulty of producing them.

"No one's ever had a way to create a large enough amount of this hydrogenase despite its incredible potential for biofuel production. But now we've got a method to stabilize and produce high quantities of the material -- and enormous increases in efficiency," he said.

The development is highly significant according to Seung-Wuk Lee, professor of bioengineering at the University of California-Berkeley, who was not a part of the study.

"Douglas' group has been leading protein- or virus-based nanomaterial development for the last two decades. This is a new pioneering work to produce green and clean fuels to tackle the real-world energy problem that we face today and make an immediate impact in our life in the near future,” said Lee, whose work has been cited in a U.S. Congressional report on the use of viruses in manufacturing.

Beyond the new study, Douglas and his colleagues continue to craft P22-Hyd into an ideal ingredient for hydrogen power by investigating ways to activate a catalytic reaction with sunlight, as opposed to introducing electrons using laboratory methods.

"Incorporating this material into a solar-powered system is the next step," Douglas said.

This research was supported by the U.S. Department of Energy.

Jan. 4, 2016

Tuesday, January 19, 2016

Bridging the Bio-Electronic Divide

New effort aims for fully implantable devices able to connect with up to one million neurons


A new DARPA program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.

The program, Neural Engineering System Design (NESD), stands to dramatically enhance research capabilities in neurotechnology and provide a foundation for new therapies.

“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”

Among the program’s potential applications are devices that could compensate for deficits in sight or hearing by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology.

Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.

Achieving the program’s ambitious goals and ensuring that the envisioned devices will have the potential to be practical outside of a research setting will require integrated breakthroughs across numerous disciplines, including
  • neuroscience,
  • synthetic biology,
  • low-power electronics,
  • photonics,
  • medical device packaging and manufacturing,
  • systems engineering, and
  • clinical testing.
In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques to first transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent those data with minimal loss of fidelity and functionality.

To accelerate that integrative process, the NESD program aims to recruit a diverse roster of leading industry stakeholders willing to offer state-of-the-art prototyping and manufacturing services and intellectual property to NESD researchers on a pre-competitive basis. In later phases of the program, these partners could help transition the resulting technologies into research and commercial application spaces.

To familiarize potential participants with the technical objectives of NESD, DARPA will host a Proposers Day meeting that runs Tuesday and Wednesday, February 2-3, 2016, in Arlington, Va. The Special Notice announcing the Proposers Day meeting is available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-16/listing.html. More details about the Industry Group that will support NESD is available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-17/listing.html. A Broad Agency Announcement describing the specific capabilities sought will be forthcoming on www.fbo.gov.

NESD is part of a broader portfolio of programs within DARPA that support President Obama’s BRAIN Initiative. For more information about DARPA’s work in that domain, please visit: http://www.darpa.mil/program/our-research/darpa-and-the-brain-initiative.

ORIGINAL: DARPA
OUTREACH@DARPA.MIL
1/19/2016

Why Spiderman can't exist: Geckos are size limit for sticking to walls

Latest research reveals why geckos are the largest animals able to scale smooth vertical walls - even larger climbers would require unmanageably large sticky footpads. Scientists estimate that a human would need adhesive pads covering 40% of their body surface in order to walk up a wall like Spiderman, and believe their insights have implications for the feasibility of large-scale, gecko-like adhesives.

Source: The Amazing Spiderman Game
A new study, published today in PNAS ("Extreme positive allometry of animal adhesive pads and the size limits of adhesion-based climbing"), shows that in climbing animals from mites and spiders up to tree frogs and geckos, the percentage of body surface covered by adhesive footpads increases as body size increases, setting a limit to the size of animal that can use this strategy because larger animals would require impossibly big feet.

Dr David Labonte and his colleagues in the University of Cambridge's Department of Zoology found that tiny mites use approximately 200 times less of their total body area for adhesive pads than geckos, nature's largest adhesion-based climbers. And humans? We'd need about 40% of our total body surface, or roughly 80% of our front, to be covered in sticky footpads if we wanted to do a convincing Spiderman impression.
This image shows a gecko and ant. (Image courtesy of A Hackmann and D Labonte)
Once an animal is big enough to need a substantial fraction of its body surface to be covered in sticky footpads, the necessary morphological changes would make the evolution of this trait impractical, suggests Labonte.

"If a human, for example, wanted to walk up a wall the way a gecko does, we'd need impractically large sticky feet - our shoes would need to be a European size 145 or a US size 114," says Walter Federle, senior author also from Cambridge's Department of Zoology.

The researchers say that these insights into the size limits of sticky footpads could have profound implications for developing large-scale bio-inspired adhesives, which are currently only effective on very small areas.

“As animals increase in size, the amount of body surface area per volume decreases – an ant has a lot of surface area and very little volume, and an elephant is mostly volume with not much surface area,” explains Labonte.

This poses a problem for larger climbing animals because, when they are bigger and heavier, they need more sticking power, but they have comparatively less body surface available for sticky footpads. This implies that there is a maximum size for animals climbing with sticky footpads – and that turns out to be about the size of a gecko.
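That scaling argument can be sketched numerically. Assuming pad area must grow in proportion to body weight while available body surface grows only as mass to the two-thirds power, the required pad fraction grows as the cube root of mass. The body masses below are illustrative guesses, and the sketch is anchored to the article's figure of roughly 40 percent coverage for a human:

```python
# Rough scaling sketch (illustrative assumptions, not the paper's data).
# Assumption: pad area must scale with body weight (mass^1), while body
# surface area scales as mass^(2/3), so the required pad fraction grows
# as mass^(1/3). Anchored to the article's ~40% figure for a human.
HUMAN_MASS_KG = 70.0        # assumed human body mass
HUMAN_PAD_FRACTION = 0.40   # from the article

def required_pad_fraction(mass_kg):
    return HUMAN_PAD_FRACTION * (mass_kg / HUMAN_MASS_KG) ** (1.0 / 3.0)

for label, mass_kg in [("mite", 1e-7), ("tree frog", 0.005),
                       ("gecko", 0.07), ("human", 70.0)]:
    pct = 100.0 * required_pad_fraction(mass_kg)
    print(f"{label:>9}: ~{pct:.3g}% of body surface")
```

With these assumptions the required coverage falls to a few percent at gecko size and to a small fraction of a percent for a mite, roughly the trend the study describes.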

Huge Lizard Caught On The Side Of Australian House. photo credit: Eric Holland
The researchers compared the weight and footpad size of 225 climbing animal species including insects, frogs, spiders, lizards and even a mammal.

“We covered a range of more than seven orders of magnitude in body weight, which is roughly the same weight difference as between a cockroach and Big Ben,” says Labonte.

How sticky footpad area changes with size. (Image: David Labonte)
“Although we were looking at vastly different animals – a spider and a gecko are about as different as a human is to an ant – their sticky feet are remarkably similar,” says Labonte.

Adhesive pads of climbing animals are a prime example of convergent evolution – where multiple species have independently, through very different evolutionary histories, arrived at the same solution to a problem. When this happens, it’s a clear sign that it must be a very good solution.

There is one other possible solution to the problem of how to stick when you’re a large animal, and that’s to make your sticky footpads even stickier.

“We noticed that within some groups of closely related species pad size was not increasing fast enough to match body size, yet these animals could still stick to walls,” says Christofer Clemente, a co-author from the University of the Sunshine Coast.

“We found that tree frogs have switched to this second option of making pads stickier rather than bigger. It’s remarkable that we see two different evolutionary solutions to the problem of getting big and sticking to walls,” says Clemente.

Across all species the problem is solved by evolving relatively bigger pads, but this does not seem possible within closely related species, probably since the required morphological changes would be too large. Instead within these closely related groups, the pads get stickier in larger animals, but the underlying mechanisms are still unclear. This is a great example of evolutionary constraint and innovation.
Diversity of sticky footpads. (Image: David Labonte) 

“Our study emphasises the importance of scaling for animal adhesion, and scaling is also essential for improving the performance of adhesives over much larger areas. There is a lot of interesting work still to be done looking into the strategies that animals use to make their footpads stickier - these would likely have very useful applications in the development of large-scale, powerful yet controllable adhesives,” says Labonte.

Source: Cambridge University

ORIGINAL: Nanowerk
Jan 19, 2016