lunes, 31 de agosto de 2015

Think Like a Tree: What We Can Learn From the Oaks That Survived Katrina

Ten years ago this week, Hurricane Katrina ripped through New Orleans and the Gulf Coast, bringing floods and gale-force winds that devastated the region and displaced more than a million people. But New Orleans’ live oaks were surprisingly resilient, as biologist Janine Benyus describes in our first episode of a new video series on biomimicry, Think Like a Tree.

As the tallest living things on earth, trees have developed strategies to protect themselves against threats to their leaved towers. In the process, they’ve “managed to solve daunting problems of engineering,” says Steven Vogel, a Duke biologist who studies the ways organisms structure themselves in moving fluids. 

Take the beating a tree gets from a hurricane. Gale-force winds hammer trees with a dynamic collection of blows, which unleashes “a suite of mechanical problems that would give an engineer nightmares,” Vogel says. Beyond withstanding high wind speeds, trees need to deal with wind acceleration and the air’s “throw weight”—its mass, basically. Calms between gusts can be damaging, too, as the tree rebounds and sways, potentially building up heavy loads on branches and roots. Not to mention the litany of other environmental factors that come into play during a storm: precipitation levels, soil conditions, the state of the surrounding trees.

So, what’s a tree to do?
Leaves that work great for photosynthesizing become liabilities in high wind, Vogel says, where they act like little sails with a lot of drag. So in strong winds of around 40 mph, the leaves of trees like maple, poplar, and holly will reconfigure into more aerodynamic shapes: curling up into little tubes, clumping together into cones, or flattening to reduce drag. And strong root systems serve as a countermeasure to the drag of the leaves and the wind’s sideways force.
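Vogel’s sails-and-drag point can be made concrete with the classic drag equation, F = ½ρC·A·v², under which drag grows with the square of wind speed. Measurements on flexible leaves suggest drag grows more slowly, roughly as v^(2+B) for a negative “Vogel exponent” B. The sketch below is illustrative only; every numeric value is an assumption, not data from the article:

```python
# Illustrative sketch of leaf drag vs. wind speed. The drag equation and
# the idea of a negative Vogel exponent are standard; all numeric values
# here are assumed for illustration.

RHO_AIR = 1.2          # air density, kg/m^3
DRAG_COEFF = 1.0       # assumed drag coefficient for a broad, flat leaf
LEAF_AREA = 0.005      # assumed leaf area, m^2 (about 50 cm^2)
VOGEL_EXPONENT = -0.8  # assumed; flexible leaves have negative exponents

def rigid_drag(v):
    """Classic drag on a non-reconfiguring object: F = 1/2 * rho * Cd * A * v^2."""
    return 0.5 * RHO_AIR * DRAG_COEFF * LEAF_AREA * v ** 2

def reconfiguring_drag(v, v_ref=5.0):
    """Above a reference speed, the leaf curls up and drag grows as v^(2 + B)."""
    if v <= v_ref:
        return rigid_drag(v)
    return rigid_drag(v_ref) * (v / v_ref) ** (2 + VOGEL_EXPONENT)

for mph in (10, 40, 80):
    v = mph * 0.44704  # mph -> m/s
    print(f"{mph:3d} mph: rigid {rigid_drag(v):.3f} N, "
          f"reconfiguring {reconfiguring_drag(v):.3f} N")
```

In this toy model, at 40 mph the reconfiguring leaf feels roughly a third of the rigid leaf’s drag, which is the qualitative effect Vogel describes.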

Trees might be silent, brilliant engineers, but Vogel cautions that they may not be the best candidates for biomimicry. Trees operate under certain constraints—they grow all their own material, which takes energy that could be spent on other needs like reproduction. “Nature usually builds to a design criterion of adequate strength,” Vogel says, and that means maximizing whatever will keep the population going. If one tree goes down, that’s okay as long as most of them survive. But we build our cell towers and skyscrapers much more sturdily than they usually need to be, because we want them to work all the time. And we can account for that, Vogel says, thanks to modern engineering. So we’ll stick with our steel beams for now. 


viernes, 21 de agosto de 2015

The world’s first true “smart drug” enhances cognition and is deemed safe by health experts

A perfect start to the day. (Carsten Schertzer/Flickr, CC-BY)
Most people looking for a cognitive boost in the morning reach for a cup of coffee or tea. But all caffeine really does is lift your mood and improve focus, and that is why it isn’t considered a “pure cognitive enhancer.”

There is, however, a real contender for that title: modafinil (also sold as Provigil). This drug—normally used to treat a sleep disorder—may be the world’s first true smart drug, according to a new systematic review. It
  • enhances attention,
  • improves learning, and
  • boosts “fluid intelligence”—which we use to solve problems and think creatively.
And it does all that without the addictive qualities of caffeine (also without the delicious variety of drinkable formats, but that’s arguably a small price to pay).

We don’t fully understand how the drug works, but one theory is that it enhances brain activity in areas that manage those skills. The review, published in the journal European Neuropsychopharmacology, considers 24 placebo-controlled studies of healthy, non-sleep-deprived people conducted between 1990 and 2014. Such an analysis overcomes some of the limitations of each of the smaller studies, such as narrow demographics or conflicting results, and draws an overarching conclusion.

Modafinil has been around for a long time, and its off-label use as a smart drug is well known in some circles. It is increasingly used by students at US and UK universities. A 2008 poll of readers of the science journal Nature, for example, found that nearly half admitted to using modafinil as a cognitive enhancer.

What’s lacking is long-term data—important because studies of other promising enhancers have shown that effects may not last over time. Crucially, however, the new systematic review deems modafinil safe for widespread use. Some previous studies had shown that modafinil led to a small drop in creativity in highly creative people, but the new review says that those negative effects are not seen consistently.

The use of cognitive enhancers is seen by many as cheating, and it is often compared to doping in sports. However, Joao Fabiano, a researcher at the University of Oxford, argues that modafinil’s use should not be seen any differently from caffeine’s. If anything, given that modafinil does more than caffeine, without the downside of addiction, perhaps we should put down that double shot of espresso and take a pill instead.

The cropped image is provided by Carsten Schertzer on Flickr under a CC-BY license.


IBM’S ‘Rodent Brain’ Chip Could Make Our Phones Hyper-Smart

At a lab near San Jose, IBM has built the digital equivalent of a rodent brain—roughly speaking. It spans 48 of the company’s experimental TrueNorth chips, a new breed of processor that mimics the brain’s biological building blocks. Image: IBM
DHARMENDRA MODHA WALKS me to the front of the room so I can see it up close. About the size of a bathroom medicine cabinet, it rests on a table against the wall, and thanks to the translucent plastic on the outside, I can see the computer chips and the circuit boards and the multi-colored lights on the inside. It looks like a prop from a ’70s sci-fi movie, but Modha describes it differently. “You’re looking at a small rodent,” he says.

He means the brain of a small rodent—or, at least, the digital equivalent. The chips on the inside are designed to behave like neurons—the basic building blocks of biological brains. Modha says the system in front of us spans 48 million of these artificial nerve cells, roughly the number of neurons packed into the head of a rodent.

Modha oversees the cognitive computing group at IBM, the company that created these “neuromorphic” chips. For the first time, he and his team are sharing their unusual creations with the outside world, running a three-week “boot camp” for academics and government researchers at an IBM R&D lab on the far side of Silicon Valley. Plugging their laptops into the digital rodent brain at the front of the room, this eclectic group of computer scientists is exploring the particulars of IBM’s architecture and beginning to build software for the chip dubbed TrueNorth.

Some researchers who got their hands on the chip at an engineering workshop in Colorado the previous month have already fashioned software that can identify images, recognize spoken words, and understand natural language. Basically, they’re using the chip to run “deep learning” algorithms, the same algorithms that drive the internet’s latest AI services, including the face recognition on Facebook and the instant language translation on Microsoft’s Skype. But the promise is that IBM’s chip can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI onto phones and other tiny devices, including hearing aids and, well, wristwatches.

“What does a neuro-synaptic architecture give us? It lets us do things like image classification at a very, very low power consumption,” says Brian Van Essen, a computer scientist at the Lawrence Livermore National Laboratory who’s exploring how deep learning could be applied to national security. “It lets us tackle new problems in new environments.”

The TrueNorth is part of a widespread movement to refine the hardware that drives deep learning and other AI services. Companies like Google and Facebook and Microsoft are now running their algorithms on machines backed with GPUs (chips originally built to render computer graphics), and they’re moving towards FPGAs (chips you can program for particular tasks). For Peter Diehl, a PhD student in the cortical computation group at ETH Zurich and the University of Zurich, TrueNorth outperforms GPUs and FPGAs in certain situations because it consumes so little power.

The main difference, says Jason Mars, a professor of computer science at the University of Michigan, is that the TrueNorth dovetails so well with deep-learning algorithms. These algorithms mimic neural networks in much the same way IBM’s chips do, recreating the neurons and synapses in the brain. One maps well onto the other. “The chip gives you a highly efficient way of executing neural networks,” says Mars, who declined an invitation to this month’s boot camp but has closely followed the progress of the chip.

That said, the TrueNorth suits only part of the deep learning process—at least as the chip exists today—and some question how big an impact it will have. Though IBM is now sharing the chips with outside researchers, it’s years away from the market. For Modha, however, this is as it should be. As he puts it: “We’re trying to lay the foundation for significant change.”

The Brain on a Phone
Peter Diehl recently took a trip to China, where his smartphone didn’t have access to the internet, an experience that cast the limitations of today’s AI in sharp relief. Without the internet, he couldn’t use a service like Google Now, which applies deep learning to speech recognition and natural language processing, because most of the computing takes place not on the phone but on Google’s distant servers. “The whole system breaks down,” he says.

Deep learning, you see, requires enormous amounts of processing power—processing power that’s typically provided by the massive data centers that your phone connects to over the internet rather than locally on an individual device. The idea behind TrueNorth is that it can help move at least some of this processing power onto the phone and other personal devices, something that can significantly expand the AI available to everyday people.

To understand this, you have to understand how deep learning works. It operates in two stages. 
  • First, companies like Google and Facebook must train a neural network to perform a particular task. If they want to automatically identify cat photos, for instance, they must feed the neural net lots and lots of cat photos. 
  • Then, once the model is trained, another neural network must actually execute the task. You provide a photo and the system tells you whether it includes a cat. The TrueNorth, as it exists today, aims to facilitate that second stage.
Once a model is trained in a massive computer data center, the chip helps you execute the model. And because it’s small and uses so little power, it can fit onto a handheld device. This lets you do more at a faster speed, since you don’t have to send data over a network. If it becomes widely used, it could take much of the burden off data centers. “This is the future,” Mars says. “We’re going to see more of the processing on the devices.”
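The two-stage split described above can be sketched in miniature. The toy example below (invented data and names; the real thing involves deep networks, not a two-weight classifier) trains a tiny model in one function, the costly “data center” stage, then runs only the cheap forward pass, the stage a chip like TrueNorth targets:

```python
# Toy illustration (invented data and names) of the two deep-learning
# stages: train once on servers, then ship only the trained weights to a
# device that runs the cheap forward pass.

import math

def forward(weights, bias, x):
    """Inference: one forward pass -- the stage a low-power chip targets."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # score in (0, 1): "is it a cat?"

def train(samples, epochs=500, lr=0.5):
    """Training: the costly, data-hungry stage done in the data center."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:
            err = forward(weights, bias, x) - label
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

# "Data center" stage: learn from labeled examples (toy 2-feature inputs).
data = [([1.0, 1.0], 1), ([0.9, 0.8], 1), ([0.1, 0.2], 0), ([0.0, 0.1], 0)]
weights, bias = train(data)

# "On device" stage: only the trained model travels to the phone.
print(round(forward(weights, bias, [0.95, 0.9])))  # a cat-like input -> 1
print(round(forward(weights, bias, [0.05, 0.1])))  # a non-cat input -> 0
```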

Neurons, Axons, Synapses, Spikes
Google recently discussed its efforts to run neural networks on phones, but for Diehl, the TrueNorth could take this concept several steps further. The difference, he explains, is that the chip dovetails so well with deep learning algorithms. Each chip mimics about a million neurons, and these can communicate with each other via something similar to a synapse, the connections between neurons in the brain.

The setup is quite different than what you find in chips on the market today, including GPUs and FPGAs. Whereas these chips are wired to execute particular “instructions,” the TrueNorth juggles “spikes,” much simpler pieces of information analogous to the pulses of electricity in the brain. Spikes, for instance, can show the changes in someone’s voice as they speak—or changes in color from pixel to pixel in a photo. “You can think of it as a one-bit message sent from one neuron to another,” says Rodrigo Alvarez-Icaza, one of the chip’s chief designers.
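IBM hasn’t published TrueNorth’s exact neuron equations here, but the general idea of spike-based computation can be illustrated with a generic leaky integrate-and-fire neuron, a standard textbook model (all parameter values below are assumed for illustration):

```python
# A generic leaky integrate-and-fire neuron, a standard textbook model.
# TrueNorth's actual neuron model is more elaborate and configurable;
# this only illustrates the idea of spikes as one-bit messages.

def lif_run(input_spikes, weight=0.4, leak=0.9, threshold=1.0):
    """Integrate weighted one-bit input spikes; emit a spike at threshold."""
    v = 0.0           # membrane potential
    out = []
    for s in input_spikes:
        v = v * leak + weight * s  # leak a little, then integrate the input
        if v >= threshold:         # fire: a one-bit message to the next neuron
            out.append(1)
            v = 0.0                # reset after firing
        else:
            out.append(0)
    return out

# A burst of input spikes yields an occasional output spike:
print(lif_run([1, 1, 1, 1, 1, 0, 0, 1]))  # → [0, 0, 1, 0, 0, 0, 0, 0]
```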

The upshot is a much simpler architecture that consumes less power. Though the chip contains 5.4 billion transistors, it draws about 70 milliwatts of power. A standard Intel computer processor, by comparison, includes 1.4 billion transistors and consumes about 35 to 140 watts. Even the ARM chips that drive smartphones consume several times more power than the TrueNorth.
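Back-of-envelope arithmetic on the figures quoted above shows how stark the gap is (nominal numbers only; real power draw varies heavily with workload):

```python
# Back-of-envelope comparison using the nominal figures quoted in the
# text (70 mW for TrueNorth; the low end, 35 W, for the Intel CPU).
truenorth = {"transistors": 5.4e9, "watts": 0.070}
intel_cpu = {"transistors": 1.4e9, "watts": 35.0}

per_t_intel = intel_cpu["watts"] / intel_cpu["transistors"]
per_t_tn = truenorth["watts"] / truenorth["transistors"]
ratio = per_t_intel / per_t_tn
print(f"TrueNorth runs ~{ratio:.0f}x more transistors per watt")  # ~1929x
```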

Of course, using such a chip also requires a new breed of software. That’s what researchers like Diehl are exploring at the TrueNorth boot camp, which began in early August and runs for another week at IBM’s research lab in San Jose, California. In some cases, researchers are translating existing code into the “spikes” that the chip can read (and back again). But they’re also working to build native code for the chip.

Parting Gift
Like these researchers, Modha discusses the TrueNorth mainly in biological terms. Neurons. Axons. Synapses. Spikes. And certainly, the chip mirrors such wetware in some ways. But the analogy has its limits. “That kind of talk always puts up warning flags,” says Chris Nicholson, the co-founder of deep learning startup Skymind. “Silicon operates in a very different way than the stuff our brains are made of.”

Modha admits as much. When he started the project in 2008, backed by $53.5M in funding from Darpa, the research arm of the Department of Defense, the aim was to mimic the brain in a more complete way using an entirely different breed of chip material. But at one point, he realized this wasn’t going to happen anytime soon. “Ambitions must be balanced with reality,” he says.

In 2010, while laid up in bed with the swine flu, he realized that the best way forward was a chip architecture that loosely mimicked the brain—an architecture that could eventually recreate the brain in more complete ways as new hardware materials were developed. “You don’t need to model the fundamental physics and chemistry and biology of the neurons to elicit useful computation,” he says. “We want to get as close to the brain as possible while maintaining flexibility.”

This is TrueNorth. It’s not a digital brain. But it is a step toward a digital brain. And with IBM’s boot camp, the project is accelerating. The machine at the front of the room is really 48 separate machines, each built around its own TrueNorth processor. Next week, as the boot camp comes to a close, Modha and his team will separate them and let all those academics and researchers carry them back to their own labs, which span over 30 institutions on five continents. “Humans use technology to transform society,” Modha says, pointing to the room of researchers. “These are the humans.”


jueves, 20 de agosto de 2015

'Artificial Leaf' Reaches Best Level Of Solar Energy Efficiency Yet

Photo credit: Green leaves. jajaladdawan/Shutterstock.

Humans have been struggling for years to create clean, renewable energy that doesn't decimate the planet. What's even more infuriating is that plants, waving gently in the breeze all the while, have been creating 'green' energy since before mankind even existed. During plant photosynthesis, water and carbon dioxide are turned into glucose and oxygen. Recently, mankind has been trying to learn from plants to produce our own clean, green machines – in this case, artificial leaves.

The latest advancement in the artificial leaf comes from Monash University in Melbourne and brings us another step closer to a commercially viable method of turning water into fuel. Instead of creating glucose, the artificial leaf uses water and sunlight to produce hydrogen and oxygen. This process of "electrochemical splitting" is achieved by running an electric current through the water. The hydrogen can then be used for fuel production.

The make-or-break for any energy-production technology is the all-important level of efficiency. If the energy output is too low, then artificial leaves will never stand a chance of replacing our current sources of energy, such as fossil fuels and nuclear power. In the past, the highest efficiency achieved in an artificial leaf was 18%. However, the scientists from Melbourne have increased this to an impressive 22%, the highest efficiency ever seen in artificial leaves. You can read about the details of this new device in Energy & Environmental Science.
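The article doesn’t spell out how that 22% is defined, but a common metric for devices like this is solar-to-hydrogen (STH) efficiency. As a hedged sketch, the standard formula is shown below, with an operating current density chosen purely so the numbers reproduce the reported figure:

```python
# Hedged sketch of the standard solar-to-hydrogen (STH) efficiency
# formula for water-splitting cells. The article gives no operating data;
# the current density below is chosen so the formula reproduces ~22%.

E_WATER_SPLITTING = 1.23  # V, thermodynamic potential to split water
SOLAR_INPUT = 100.0       # mW/cm^2, standard "one sun" illumination
FARADAIC_EFF = 1.0        # assume all current ends up making hydrogen

def sth_efficiency(current_density_ma_cm2):
    """STH = (1.23 V * J * Faradaic efficiency) / solar power in."""
    return E_WATER_SPLITTING * current_density_ma_cm2 * FARADAIC_EFF / SOLAR_INPUT

print(f"{sth_efficiency(17.9):.1%}")  # → 22.0%
```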

While this level of efficiency is the best yet, it is still not quite good enough to make the process financially viable. However, the researchers note that they are aware of the parameters that need fine-tuning and which components need tweaking for the next generation of tests.

“Electrochemical splitting of water could provide a cheap, clean and renewable source of hydrogen as the ultimately sustainable fuel. This latest breakthrough is significant in that it takes us one step further towards this becoming a reality,” Professor Leone Spiccia, the lead researcher, said in a statement. Creating energy without waste is one of the biggest issues the world is facing in the 21st century. Just recently, President Obama set an ambitious goal of reducing emissions by more than 80% by 2050, relative to 2005 levels. This target could be much more easily achieved with the assistance of something like the artificial leaf.

If the artificial leaf can be improved to a marketable level, then we could be seeing forests of them powering our houses, cars and maybe even entire cities. 

“Hydrogen can be used to generate electricity directly in fuel cells. Cars driven by fuel cell electric engines are becoming available from a number of car manufacturers. Hydrogen could even be used as an inexpensive energy storage technology at the household level to store energy from roof-top solar cells,” Professor Doug MacFarlane, co-author of the study, summarized.

by Caroline Reid
August 19, 2015

Google Won The Internet. Now It Wants to Cure Diseases

Image: Rafe Swan/Getty Images

WHEN GOOGLE CO-FOUNDER Larry Page dropped his now-famous blog post revealing that Google was reorganizing itself as Alphabet, one of the most striking things was what he chose to highlight as the kind of work these newly independent non-Google companies would be pursuing.

“The companies that are pretty far afield of our main Internet products [are] contained in Alphabet instead,” Page wrote in the blog post announcing Alphabet’s existence. “Good examples are our health efforts: Life Sciences (that works on the glucose-sensing contact lens), and Calico (focused on longevity).”

Google has long dabbled in medicine, but Page’s announcement signaled that he wants biomedical research to be more than just a side project for his newly christened company. Behind the scenes, efforts were already well under way to transform Google into a place that was serious about life sciences.

Under Alphabet, life sciences will become its own independent division, though it doesn’t have an official name just yet. (The company says to expect more news soon.) But a few hints suggest the life sciences group had been operating fairly independently already. Last month, CFO Ruth Porat singled out life sciences during a quarterly earnings call as one of the areas Google sees as “longer-term sources of revenue.” To get there, the company has been quietly recruiting top scientific talent, from immunologists to neurologists to nanoparticle engineers.

“Google Life Sciences is focused on shifting health care from a reactive, undifferentiated approach to a proactive, targeted approach,” reads one of the company’s recent job listings. Biomedical researchers at Google will work to transform the “detection, prevention, management and even our basic understanding of disease,” the company says. In other words, just like everything else it does, the company once known as Google intends to train its outsized ambition on fixing the most basic problems afflicting human health.

Building An Infrastructure
For the past two years, Google’s life science efforts have been headed up by Andrew Conrad, previously the chief scientific officer at LabCorp and the co-founder of the National Genetics Institute. He leads more than 150 scientists who come from fields as wide-ranging as astrophysics, theoretical math, and oncology. “Our central thesis was that there’s clearly something amiss in Western medicine,” Conrad told Steven Levy of Backchannel back in October.

Sam Gambhir, a professor of radiology, bioengineering, and materials science at Stanford University who has collaborated with Conrad since before Google Life Sciences was a formal division within Google X, says the division isn’t just playing around. Gambhir says projects on which he’s partnered with Google’s life sciences team include the use of nanotechnology to improve diagnostics as well as devices to continuously monitor biomarkers.

“They’re systematically building an infrastructure to tackle things in-house as well as collaborate with multiple universities,” Gambhir tells WIRED. “It’s a very serious effort, and it seems to have always been supported from the very top of the company.”

Tackling Chronic Disease
One of the longest-standing efforts has been a project to develop new ways of diagnosing and treating diabetes. Last year Google unveiled a smart contact lens diabetics can use to read blood sugar levels through the tears in their eyes. Pharmaceutical giant Novartis announced that it would license the smart lens tech from Google, and the two companies are exploring other uses for the tech. Just this month, Google announced it was partnering with Dexcom, a glucose-monitoring company, to focus on making a continuous glucose monitor that’s cheaper, more convenient than current solutions, and disposable, the company said.

Google is also diving deep into genomics. Gambhir says a committee of scientists from Google, Duke University, and Stanford University has been meeting multiple times a week for about a year now to work on the design of what Google has called its Baseline Study, a project that will ultimately collect anonymous genetic information from 10,000 people to create a “baseline” picture of what a healthy human being looks like on a molecular level. Gambhir, a collaborator on the project, says Baseline is intended to be a “longitudinal study on human health to understand the transition from health to disease.”

Other work at the molecular level includes a cancer-detecting pill that pairs with a wristband, all part of what Google called its “nanoparticle platform.” Part of getting the wearable to work correctly included understanding how light passed through skin, which led Conrad and his team to make artificial human skin. Life Sciences is looking at other chronic diseases, too. In January, Conrad told Bloomberg that the team planned to partner with multiple sclerosis drugmaker Biogen to study environmental and biological contributors to the disease’s progression.

Ageless Problems
Last September, Google bought Lift Labs, maker of Liftware—a high-tech spoon designed to help people with neurodegenerative tremors eat. But Google wouldn’t be Google (er, Alphabet wouldn’t be Alphabet) if it was just concerned with addressing the symptoms of disease. Aging itself is another problem it hopes to disrupt. Calico, which is organizationally separate from the life sciences group, aims to maximize the human lifespan by preventing aging. The life sciences division, meanwhile, is focused on staving off diseases that could interfere with Calico’s goal. Neither of those efforts seems very closely tied to Google’s original business model of targeting ads to users based on Internet searches. Now that life sciences have become independent under Alphabet, it looks like they don’t have to be.


miércoles, 19 de agosto de 2015

First almost fully-formed human brain grown in lab, researchers claim

Research team say tiny brain could be used to test drugs and study diseases, but scientific peers urge caution as data on breakthrough kept under wraps

The tiny brain, which resembles that of a five-week-old foetus, is not conscious. Photograph: Ohio State University

An almost fully-formed human brain has been grown in a lab for the first time, claim scientists from Ohio State University. The team behind the feat hope the brain could transform our understanding of neurological disease.

Though not conscious, the miniature brain, which resembles that of a five-week-old foetus, could potentially be useful for scientists who want to study the progression of developmental diseases. It could also be used to test drugs for conditions such as Alzheimer’s and Parkinson’s, since the regions they affect are in place during an early stage of brain development.

The brain, which is about the size of a pencil eraser, is engineered from adult human skin cells and is the most complete human brain model yet developed, claimed Rene Anand of Ohio State University, Columbus, who presented the work today at the Military Health System Research Symposium in Fort Lauderdale, Florida.

Previous attempts at growing whole brains have at best achieved mini-organs that resemble those of nine-week-old foetuses, although these “cerebral organoids” were not complete and only contained certain aspects of the brain. “We have grown the entire brain from the get-go,” said Anand.

Anand and his colleagues claim to have reproduced 99% of the brain’s diverse cell types and genes. They say their brain also contains a spinal cord, signalling circuitry and even a retina.

The ethical concerns were non-existent, said Anand. “We don’t have any sensory stimuli entering the brain. This brain is not thinking in any way.”

Anand claims to have created the brain by converting adult skin cells into pluripotent cells: stem cells that can be programmed to become any tissue in the body. These were then grown in a specialised environment that persuaded the stem cells to grow into all the different components of the brain and central nervous system.

According to Anand, it takes about 12 weeks to create a brain that resembles the maturity of a five-week-old foetus. To go further would require a network of blood vessels that the team cannot yet produce. “We’d need an artificial heart to help the brain grow further in development,” said Anand.

Several researchers contacted by the Guardian said it was hard to judge the quality of the work without access to more data, which Anand is keeping under wraps due to a pending patent on the technique. Many were uncomfortable that the team had released information to the press without the science having gone through peer review.

Zameel Cader, a consultant neurologist at the John Radcliffe Hospital, Oxford, said that while the work sounds very exciting, it’s not yet possible to judge its impact. “When someone makes such an extraordinary claim as this, you have to be cautious until they are willing to reveal their data.”

If the team’s claims prove true, the technique could revolutionise personalised medicine. “If you have an inherited disease, for example, you could give us a sample of skin cells, we could make a brain and then ask what’s going on,” said Anand.

You could also test the effect of different environmental toxins on the growing brain, he added. “We can look at the expression of every gene in the human genome at every step of the development process and see how they change with different toxins. Maybe then we’ll be able to say ‘holy cow, this one isn’t good for you.’”

For now, the team say they are focusing on using the brain for military research, to understand the effect of post traumatic stress disorder and traumatic brain injuries.

ORIGINAL: The Guardian
Tuesday 18 August 2015 

viernes, 14 de agosto de 2015

Modified yeast produce opiates from sugar

Yeast growing happily in a petri dish. Image via Stephanie Galanie. Shutterstock
Move over, poppies. In one of the most elaborate feats of synthetic biology to date, a research team has engineered yeast with a medley of plant, bacterial, and rodent genes to turn sugar into thebaine, the key opiate precursor to morphine and other powerful painkilling drugs that have been harvested for thousands of years from poppy plants. The team also showed that with further tweaks, the yeast could make hydrocodone, a widely used painkiller that is now made chemically from thebaine.

“This is a major milestone,” says Jens Nielsen, a synthetic biologist at Chalmers University of Technology in Göteborg, Sweden. The work, he adds, demonstrates synthetic biology's increasing sophistication at transferring complex metabolic pathways into microbes.

By tweaking the yeast pathways, medicinal chemists may be able to produce more effective, less addictive versions of opiate painkillers. But some biopolicy experts worry that morphine-making yeast strains could also allow illicit drugmakers to brew heroin as easily as beer enthusiasts home brew today—the drug is a simple chemical conversion from morphine. That concern is one reason the research team, led by Christina Smolke, a synthetic biologist at Stanford University in Palo Alto, California, stopped short of making a yeast strain with the complete morphine pathway; medicinal drug makers also primarily use thebaine to make new compounds.

Synthetic biologists had previously engineered yeast to produce artemisinin, an antimalarial compound, but that required inserting just a handful of plant genes. To get yeast to make thebaine, Smolke's team coaxed the cells to express 21 genes in total, including many added from a diverse set of species (see graphic); making hydrocodone took 23 genes.

Their success, reported online this week in Science, caps a race to install the complex opioid pathway in yeast. Last year, Smolke's team reported engineering yeast to carry out the tail end of the process, going from thebaine to morphine. In April, Vincent Martin, a microbiologist at Concordia University in Montreal, Canada, and his colleagues said they had created yeast that could go from an earlier intermediate compound called R-reticuline to morphine. A few weeks later, John Dueber, a synthetic biologist at the University of California, Berkeley, and colleagues announced yeast that carries out most of the first half of the pathway, going from glucose to another intermediate compound, S-reticuline. Finally, two groups reported in late June that they had identified the long-sought enzyme needed to carry out the chemical transformation in the middle, S-reticuline to R-reticuline.

Even so, many predicted it would take years to put all the pieces together. As it turns out, back in May, Smolke and her colleagues had already largely finished the task. “It shows this field is really moving fast,” says Kenneth Oye, a biotechnology policy expert at the Massachusetts Institute of Technology in Cambridge.

The most important challenge, Smolke says, was increasing the efficiency of each step so losses wouldn't build up. In one step, for example, a plant enzyme called SalSyn was doing a poor job of converting R-reticuline to another compound called salutaridine. Eventually, Smolke's team discovered that the yeast made the enzyme incorrectly, attaching the wrong sugars to it. The researchers fixed the problem by reengineering the inserted plant gene.

Smolke plans to go on tinkering. The microbes need to increase output of thebaine by a factor of 100,000 for drug companies to be interested in using them to make medicines. That won't be easy. But Martin notes that researchers boosted the output of the artemisinin-making yeast by a similar amount. “It will happen,” he says. “The only question is how fast.” Smolke recently formed a company called Antheia, based in Palo Alto, that aims to push that pace.

To keep up with the yeast engineers, Oye says policy experts need to develop rules to limit the risk of unintended uses of engineered microbes. In the case of opiate-making yeast, such rules might forbid developing strains to produce illicit drugs, such as heroin, and require scientists to build in genes that prevent the microbes from living outside of a controlled laboratory environment.

Not everyone is worried about home-brewed opiates. Andrew Ellington, a synthetic biologist at the University of Texas, Austin, calls such fears “overblown.” The idea that producing vanishingly small quantities of opiates through fermentation is somehow going to dwarf the problem of illegal drugs made from poppies is “laughable,” he says. But Martin disagrees. “Poppy fields are not readily available to someone in Chicago, whereas yeast can be made available to anyone.”

In Science Magazine:
Stephanie Galanie, Kate Thodey, Isis J. Trenchard, Maria Filsinger Interrante, and Christina D. Smolke. Science aac9373. Published online 13 August 2015.

Science, 14 August 2015: Vol. 349, no. 6249, p. 677. DOI: 10.1126/science.349.6249.677

Thursday, August 6, 2015

Making polymers from a greenhouse gas

A future where power plants feed their carbon dioxide directly into an adjacent production facility instead of spewing it up a chimney and into the atmosphere is definitely possible, because CO2 isn't just an undesirable greenhouse gas; it is also a good source of carbon for processes like polymer production. In the journal Angewandte Chemie, American scientists have now introduced a two-step, one-pot conversion of CO2 and epoxides to polycarbonate block copolymers that contain both water-soluble and hydrophobic regions and can aggregate into nanoparticles or micelles.

CO2 and epoxides (highly reactive compounds with a three-membered ring made of two carbon atoms and one oxygen atom) can be polymerized to form polycarbonates in reactions that use special catalysts. These processes are a more environmentally friendly alternative to conventional production processes and have already been introduced by several companies. However, because current CO2-based polycarbonates are hydrophobic and have no functional groups, their applications are limited. In particular, biomedical applications, an area where the use of biocompatible polycarbonates is well established, have been left out.

A team led by Donald J. Darensbourg, working with graduate student Yanyan Wang at Texas A&M University (USA), has provided a solution. For the first time, the researchers have been able to produce amphiphilic polycarbonate block copolymers in which both the hydrophilic and hydrophobic regions are based on CO2. They were also able to incorporate a variety of functional and charged groups into the polymers. Because it is very difficult to find building blocks to make hydrophilic polycarbonates, the researchers used a trick: they polymerized first and attached the water-soluble groups afterwards.

The entire process is even a "one-pot reaction": The researchers first produce the hydrophobic regions by polymerizing CO2 and propylene oxide (as the epoxide component). In the same vessel, they then switch to a different building block, allyl glycidyl ether (AGE), an epoxide with a double bond in its side chain, and continue the polymerization. The AGE-containing polymer grows on both ends of the existing polycarbonate, yielding a triblock copolymer, and the length of the blocks can be controlled precisely.

Subsequently, a "thiol–ene click reaction" can be used to simply "click" a water-soluble group into place at the double bond. This makes it possible to attach acidic and/or basic groups that carry a positive or negative charge in certain pH ranges. Some of the amphiphilic polycarbonates made by this method aggregate into particles or micelles through self-organization. This, along with the ability to attach bioactive substances, could open up many more possibilities for biomedical applications.

More information: "Construction of Versatile and Functional Nanostructures Derived from CO2-based Polycarbonates." Angew. Chem. Int. Ed., doi: 10.1002/anie.201505076




Giant Tortoises Island Hop Across the Galápagos

Credit: Galapagos Conservancy
Giant Galápagos tortoises, the world’s biggest, have had it rough. Thanks to pirates and whalers eating them and to non-native species like goats destroying their habitat, four of the 14 documented species are extinct. Most recently, the Pinta species vanished with the 2012 death of Lonesome George, after decades of attempts to get him to reproduce.

But the tortoises emerging from the crates above represent a milestone in tortoise restoration efforts. They are among 201 tortoises recently released onto Santa Fe Island, which lost its tortoise species a century and a half ago.

“We wanted to do this for a long time,” said Linda Cayot, the science adviser for the Galápagos Conservancy, which, in collaboration with the Galapagos National Park Directorate, runs the Giant Tortoise Restoration Initiative. It wasn’t easy. Without any Santa Fe tortoises left (nobody alive now has actually seen them – their existence is known mainly from whalers’ logbooks and museum-preserved bone fragments), conservationists turned to a close genetic relative: tortoises from Española Island.

Española is itself a tortoise success story. By the 1960s, the island was so sparsely populated that its 12 females and two males never even crossed paths to mate. Brought to a breeding center on another island, they were joined by a male from the San Diego Zoo, who some naturalists nicknamed “Diego” and who “became the major stud,” Dr. Cayot said. The other two males stepped up too.

The tortoise eggs were incubated at temperatures adjusted to hatch two females for every male (slightly warmer eggs produce females). At about age 4, old enough to withstand predators, the young tortoises were placed on Española, which now has about 1,000 tortoises, Dr. Cayot said. Santa Fe was next.

Before dawn on June 27, the 201 Española tortoises, between 4 and 10 years old, were ferried there and carried, up to 12 in a backpack, along a long rocky trail to Santa Fe’s interior. The 30 oldest, including the two pictured above, have radio transmitters glued to their carapaces.

Periodically, conservationists will locate those tortoises to study their movements and their effects on vegetation, Dr. Cayot said, noting that about half of repatriated tortoises die from scarcity of food and water. Those that find what they need are likely to live a century or more.

By The New York Times 
August 6, 2015