Tuesday, April 30, 2013

Talking about the Computational Future at SXSW 2013

March 19, 2013

Last week I gave a talk at SXSW 2013 in Austin about some of the things I’m thinking about these days—including quite a few that I’ve never talked publicly about before. Here’s a video, and a slightly edited transcript:




Well, this is a pretty exciting time for me. Because it turns out that a whole bunch of things that I’ve been working on for more than 30 years are all finally converging, in a very nice way. And what I’d like to do here today is tell you a bit about that, and about some things I’ve figured out recently—and about what it all means for our future.

This is going to be a bit of a wild talk in some ways. It’s going to go from pretty intellectual stuff about basic science and so on, to some really practical technology developments, with a few sneak peeks at things I’ve never shown before.

Let’s start from some science. And you know, a lot of what I’ll say today connects back to what I thought at first was a small discovery that I made about 30 years ago. Let me tell you the story.

I started out at a pretty young age as a physicist. Diligently doing physics pretty much the way it had been done for 300 years. Starting from this-or-that equation, and then doing the math to figure out predictions from it. That worked pretty well in some cases. But there were too many cases where it just didn’t work. So I got to wondering whether there might be some alternative; a different approach.

At the time I’d been using computers as practical tools for quite a while—and I’d even created a big software system that was a forerunner of Mathematica. And what I gradually began to think was that actually computers—and computation—weren’t just useful tools; they were actually the main event. And that one could use them to generalize how one does science: to think not just in terms of math and equations, but in terms of arbitrary computations and programs.

So, OK, what kind of programs might nature use? Given how complicated the things we see in nature are, we might think the programs it’s running must be really complicated. Maybe thousands or millions of lines of code. Like programs we write to do things.

But I thought: let’s start simple. Let’s find out what happens with tiny programs—maybe a line or two of code long. And let’s find out what those do. So I decided to do an experiment. Just set up programs like that, and run them. Here’s one of the ones I started with. It’s called a cellular automaton. It consists of a line of cells, each one either black or white. And it runs down the page computing the new color of each cell using the little rule at the bottom there.


OK, so there’s a simple program, and it does something simple. But let’s point our computational telescope out into the computational universe and just look at all simple programs that work like the one here.


Well, we see a bunch of things going on. Often pretty simple. A repeating pattern. Sometimes a fractal. But you don’t have to go far before you see much stranger stuff.

This is a program I call “rule 30”. What’s it doing? Let’s run it a little longer.


That’s pretty complicated. And if we just saw this somewhere out there, we’d probably figure it was pretty hard to make. But actually, it all comes just from that tiny program at the bottom. That’s it. And when I first saw this, it was my sort of little modern “Galileo moment”. I’d seen something through my computational telescope that eventually made me change my whole world view. And made me realize that computation—even as done by a tiny program like the one here—is vastly more powerful and important than I’d ever imagined.
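To make this concrete for readers, here is a minimal sketch (in Python, my own illustration rather than anything from the talk) of the kind of tiny program being described: an elementary cellular automaton, where the rule is just a number from 0 to 255 encoding the eight possible neighborhood outcomes. Running it with rule 30 produces the complex pattern discussed above, and looping over all 256 rules is a crude version of pointing the “computational telescope” at every program of this type.

```python
def step(cells, rule):
    """One elementary-CA step: each cell's new color depends only on
    itself and its two neighbors (periodic boundaries). The 8 possible
    neighborhoods index into the bits of `rule` (an integer 0-255)."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=79, steps=40):
    """Start from a single black cell and print the evolution."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else " " for c in cells))
        cells = step(cells, rule)

run(30)  # rule 30: complex, seemingly random behavior from a tiny rule
# for r in range(256): run(r, steps=8)   # survey all 256 simple programs
```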


Well, I’ve spent the past few decades working through the consequences of this. It’s led me to build a new kind of science, to create all sorts of practical technology, and to think about almost everything in a different way. I published a big book about the science about ten years ago. And when the book came out, there was quite a bit of “paradigm shift turbulence”. But looking back, it’s really nice to see how well the science has taken root.



And for example there are models based on my kinds of simple programs showing up everywhere. After 300 years of being dominated by Newton-style equations and math, the frontiers are definitely now going to simple programs and the new kind of science.

But there’s still one ultimate app out there to be done: to figure out the fundamental theory of physics—to figure out how our whole universe works. It’s kind of tantalizing. We see these very simple programs, with very complex behavior.


It makes one think that maybe there’s a simple program for our whole universe. And that even though physics seems to involve more and more complicated equations, somewhere underneath it all there might just be a tiny little program. We don’t know if things work that way. But if out there in the computational universe of possible programs, the program for our universe is just sitting there waiting to be found, it seems embarrassing not to be looking for it.

Now if there is indeed a simple program for our universe, it’s sort of inevitable that it has to operate kind of underneath our standard notions like space and time and so on. Maybe it’s a little like this.


A giant network of nodes that make up space, a bit like molecules make up the air in this room. Well, you can start just trying possible programs that create such things. Each one is in a sense a candidate universe.


And when you do this, you can pretty quickly say most of them can’t be our universe. Time stops after an instant. There are an infinite number of dimensions. There can’t be particles or matter. Or other pathologies.

But what surprised me is that you don’t have to go very far in this universe of possible universes before you start finding ones that are very plausible. And that for example seem like they’ll show the standard laws of gravity, and even some features of quantum mechanics. At some level it turns out to be irreducibly hard to work out what some of these candidate universes will do. But it’s quite possible that already caught in our net is the actual program for our universe. The whole thing. All of reality.

Well, if you’d asked me a few years ago what I thought I’d be doing now, I’d probably have said “hunting for our universe”. But fortunately or unfortunately, I got seriously sidetracked. Because I realized that once one starts to understand the idea of computation, there’s just an incredible amount of technology one can build—that’s to me quite fascinating, and that I think is also pretty important for the world. And in fact, right off the bat, there’s a whole new methodology one can use for creating technology.

Monday, April 29, 2013

Scripps Research Institute Scientists Discover How a Protein Finds Its Way


Katrin Karbstein.
Photo: TSRI
JUPITER, FL, April 29, 2013 – Proteins, the workhorses of the body, can have more than one function, but they often need to be very specific in their action or they create cellular havoc, possibly leading to disease.

Scientists from the Florida campus of The Scripps Research Institute (TSRI) have uncovered how an enzyme co-factor can bestow specificity on a class of proteins with otherwise nonspecific biochemical activity.

The protein in question helps in the assembly of ribosomes, large macromolecular machines that are critical to protein production and cell growth. This new discovery expands scientists’ view of the role of co-factors and suggests such co-factors could be used to modify the activity of related proteins and their role in disease.

“In ribosome production, you need to do things very specifically,” said TSRI Associate Professor Katrin Karbstein, who led the study. “Adding a co-factor like Rrp5 forces these enzymes to be specific in their actions. The obvious possibility is that if you could manipulate the co-factor, you could alter protein activity, which could prove to be tremendously important.”

The new study, which is being published the week of April 29, 2013, in the online Early Edition of the Proceedings of the National Academy of Sciences, sheds light on proteins called DEAD-box proteins, a provocative title actually derived from their amino acid sequence. These proteins regulate all aspects of gene expression and RNA metabolism, particularly in the production of ribosomes, and are involved in cell metabolism. The link between defects in ribosome assembly and cancer and between DEAD-box proteins and cancer is well documented.

The findings show that the DEAD-box protein Rok1, needed in the production of a small ribosomal subunit, recognizes the RNA backbone, the basic structural framework of nucleic acids. The co-factor Rrp5 then gives Rok1 the ability to target a specific RNA sequence by modulating the structure of Rok1.

“Despite extensive efforts, the roles of these DEAD-box proteins in the assembly of the two ribosomal subunits remain largely unknown,” Karbstein said. “Our study suggests that the solution may be to identify their cofactors first.”

The first author of the study, “Cofactor-Dependent Specificity of a DEAD-box Protein,” is Crystal L. Young. Sohail Khoshnevis is also a co-author of the paper.

The study was supported by National Institutes of Health Grant R01-GM086451 and the American Heart Association.

About The Scripps Research Institute
The Scripps Research Institute (TSRI) is one of the world's largest independent, not-for-profit organizations focusing on research in the biomedical sciences. TSRI is internationally recognized for its contributions to science and health, including its role in laying the foundation for new treatments for cancer, rheumatoid arthritis, hemophilia, and other diseases. An institution that evolved from the Scripps Metabolic Clinic founded by philanthropist Ellen Browning Scripps in 1924, the institute now employs about 3,000 people on its campuses in La Jolla, CA, and Jupiter, FL, where its renowned scientists—including three Nobel laureates—work toward their next discoveries. The institute's graduate program, which awards PhD degrees in biology and chemistry, ranks among the top ten of its kind in the nation. For more information, see www.scripps.edu.

# # #

For information:
Office of Communications 
Tel: 858-784-2666
Fax: 858-784-8136

No magic show: Real-world levitation to inspire better pharmaceuticals

BY JARED SAGOFF 
SEPTEMBER 12, 2012


It’s not a magic trick and it’s not sleight of hand – scientists really are using levitation to improve the drug development process, eventually yielding more effective pharmaceuticals with fewer side effects.

Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have discovered a way to use sound waves to levitate individual droplets of solutions containing different pharmaceuticals. While the connection between levitation and drug development may not be immediately apparent, a special relationship emerges at the molecular level.

At the molecular level, pharmaceutical structures fall into one of two categories: amorphous or crystalline. Amorphous drugs typically are taken up by the body more efficiently than their crystalline cousins; this is because amorphous drugs are more soluble and have higher bioavailability, suggesting that a lower dose can produce the desired effect.

“One of the biggest challenges when it comes to drug development is in reducing the amount of the drug needed to attain the therapeutic benefit, whatever it is,” said Argonne X-ray physicist Chris Benmore, who led the study.

“Most drugs on the market are crystalline – they don’t get fully absorbed by the body and thus we aren’t getting the most efficient use out of them,” added Yash Vaishnav, Argonne Senior Manager for Intellectual Property Development and Commercialization.

Getting pharmaceuticals from solution into an amorphous state, however, is no easy task. If the solution evaporates while it is in contact with part of a vessel, it is far more likely to solidify in its crystalline form. “It’s almost as if these substances want to find a way to become crystalline,” Benmore said. 

In order to avoid this problem, Benmore needed to find a way to evaporate a solution without it touching anything. Because liquids conform to the shape of their containers, this was a nearly impossible requirement – so difficult, in fact, that Benmore had to turn to an acoustic levitator, a piece of equipment originally developed for NASA to simulate microgravity conditions.

Levitation or “containerless processing” can form pristine samples that can be probed in situ with the high-energy X-ray beam at Argonne’s Advanced Photon Source. “This allows amorphization of the drug to be studied while it is being processed,” said Rick Weber, who works on the project team at the synchrotron.

The acoustic levitator uses two small speakers to generate sound waves at frequencies slightly above the audible range – roughly 22 kilohertz. When the top and bottom speakers are precisely aligned, they create two sets of sound waves that perfectly interfere with each other, setting up a phenomenon known as a standing wave.

At certain points along a standing wave, known as nodes, there is no net transfer of energy at all. Because the acoustic pressure from the sound waves is sufficient to cancel the effect of gravity, light objects are able to levitate when placed at the nodes.
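As a quick back-of-the-envelope check (my own numbers, not from the article): the nodes of a standing wave sit half a wavelength apart, so at roughly 22 kilohertz in air the levitation points are spaced only a few millimetres apart.

```python
# Node spacing of a standing wave is half a wavelength (assumed values).
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C (assumption)
FREQUENCY = 22_000.0    # Hz, "slightly above the audible range"

wavelength = SPEED_OF_SOUND / FREQUENCY      # ~15.6 mm
node_spacing = wavelength / 2                # ~7.8 mm
print(f"wavelength   = {wavelength * 1e3:.1f} mm")
print(f"node spacing = {node_spacing * 1e3:.1f} mm")
```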

Although only small quantities of a drug can currently be “amorphized” using this technique, it remains a powerful analytical tool for understanding the conditions that make for the best amorphous preparation, Vaishnav explained.

Argonne researchers have already investigated more than a dozen different pharmaceuticals, and the laboratory’s Technology Development & Commercialization Division is currently pursuing a patent for the method. Technology Development & Commercialization is also interested in partnering with the pharmaceutical industry to develop the technology further as well as to license it for commercial development.

After adapting the technology for drug research, the Argonne scientists teamed up with Professors Stephen Byrn and Lynne Taylor at the Department of Industrial and Physical Pharmacy at Purdue University and Jeffery Yarger of the Department of Chemistry and Biochemistry at Arizona State University. The group is now working on identifying which drugs the levitation instrumentation will impact most strongly.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

3D Human Liver Tissue Model

ORIGINAL: Organovo

OVERVIEW
Liver cells, in particular the parenchymal hepatocytes, are widely used in the laboratory to assess the potential toxicity or efficacy of drugs. Hepatocytes inside the body have a nearly unlimited capacity for replication. When as much as two-thirds of a whole healthy liver is surgically removed, the hepatocytes within the liver remnant undergo rapid and extensive proliferation to restore liver mass completely [1, 2, 3].

However, once removed from the body, hepatocytes replicate poorly and rapidly lose critical liver-specific functions. The liver is responsible for
  • filtering the blood
  • metabolizing and transporting drugs, and 
  • producing a myriad of proteins that are critical to homeostasis (albumin, clotting factors, enzymes involved in protein metabolism). 
Many genetic disorders are linked to reduction or absence of proteins that would normally be produced by the liver. Furthermore, the liver is central to the pathogenesis of several infectious diseases, including hepatitis, and it can also be seriously and irreversibly injured by chronic exposure to alcohol.

Most liver functions are dependent, in part, on architecture. Hepatocytes inside the body are polarized along a border of endothelial cells, with formation of canaliculi along their apical surface and tight junctions between neighboring cells. Loss of polarization—as occurs when hepatocytes are cultured in simple monolayers on standard tissue culture-treated plastic—leads to loss of function and an inability of the hepatocyte to maintain the intracellular architecture that enables absorption, transport, and bile production. It is known from the literature that hepatocytes which are maintained in culture environments that support polarization and three-dimensionality retain critical functions for a longer period outside of the body [4].

BIOPRINTED LIVER TISSUE MODEL
Organovo’s NovoGen Bioprinting platform was utilized to generate bioprinted liver tissue prototypes that contain both parenchymal and non-parenchymal cells in spatially controlled, user-defined geometries that reproduce compositional and architectural features of native tissue.

One advantage of our automated bioprinting platform is that it enables fabrication and comparative testing of multiple compositions and geometries, so that winning combinations can be identified systematically based on histological and functional outcomes.

Cross-section of multi-cellular bioprinted human liver tissue, stained with hematoxylin & eosin (H&E).
Beginning with hepatocytes (the predominant parenchymal cells of the liver), designs were created based on shapes and cellular interfaces found in native liver tissue. Non-parenchymal cells, including endothelial cells and hepatic stellate cells, were positioned in defined locations relative to hepatocytes, creating a compartmentalized architecture that was established at the time of fabrication and substantially maintained over time in culture.

This image is a cross-section of bioprinted human liver tissue demonstrating compartmentalization between the hepatocytes (shown as blue nuclei), endothelial cells (red), and hepatic stellate cells (green).
In addition to the cell type-specific compartmentalization, two histomorphological features can be appreciated in these bioprinted liver tissues: 
1. the development of microvascular networks within the tissue; and 
2. the formation of tight intercellular junctions among the hepatocytes.

The image above shows bioprinted human liver with CD31+ microvessels (green) forming within the tissue.
The image above shows formation of intercellular junctions between hepatocytes in bioprinted liver tissue, highlighted by E-Cadherin immunochemistry (green).
Importantly, these multi-cellular, 3D liver tissues possess critical attributes central to liver function, including production of liver-specific proteins such as albumin and transferrin, biosynthesis of cholesterol, and inducible cytochrome P450 activities, including CYP1A2 and CYP3A4. 

Production of the liver-specific protein, albumin, was 5 to 9 times greater on a per-cell basis when compared to matched 2D controls. These functional data, combined with the unique histological features of the tissues, suggest they may be a compelling alternative to traditional 2D hepatocyte cultures for predictive studies, especially those involving longer-term tissue toxicity assessments or studies of disease development and progression where results need to be interpreted in the context of cell-cell interactions.
CYP1A2 and CYP3A4 were measured with Pro-Glo™ CYP450 assays (Promega), after induction with verapamil or dexamethasone, respectively. Measurements were taken 135 hours after the 3D liver tissues were bioprinted, and are reported as fold induction over matched, non-induced controls.
The overall goal of studies like these is to develop living, multi-cellular human tissues that can be maintained in the laboratory environment for extended periods of time and sampled serially for both functional and histological changes in response to injury, pathogens, or treatments.

REFERENCES
  1. Nagasue N, Yukaya H, Ogawa Y, Kohno H, Nakamura T. Ann Surg. 1987 Jul; 206(1):30-9.
  2. Marcos A, Fisher RA, Ham JM, Shiffman ML, Sanyal AJ, Luketic VA, Sterling RK, Fulcher AS, Posner MP. Transplantation. 2000 Apr 15; 69(7):1375-9.
  3. Yamanaka N, Okamoto E, Kawamura E, Kato T, Oriyama T, Fujimoto J, Furukawa K, Tanaka T, Tomoda F, Tanaka W. Hepatology. 1993 Jul; 18(1):79-85.
  4. J Pharm Sci. 2011 Jan; 100(1):59-74. doi: 10.1002/jps.22257. Epub 2010 Jun 8.

OneStart Top 10 Announced!

ORIGINAL: Oxbridge Biotech
John Daley
21st April 2013

OBR and SR-One are pleased to announce the Top 10 teams in the OneStart Competition. After extensive work with their industry mentors, all 35 semi-finalist teams developed comprehensive pitches and business plans. The judges, after poring over the submissions, have chosen only 10 to move on to the final round. These teams will present their ideas at the Finals Gala on May 16th, where one will walk away with the £100K prize!



1) anywhereHPLC
anywhereHPLC is a novel analytical platform developed by researchers at the Institute of Chemical Biology at Imperial College in London, allowing precise and reproducible measurement of small-molecule metabolites at the point of sampling – in the field or the home. anywhereHPLC will provide rapid diagnoses and disease monitoring in many environments where no alternative tool exists, ranging from applications in the developing world to the patient-led supervision of chronic diseases in Europe and the USA. anywhereHPLC can deliver instant results when and where they are needed, providing the diagnostic power of gold-standard laboratory equipment from an integrated, disposable unit with a footprint smaller than a paperback book.

2) BioAmp
The cost of healthcare is rising unsustainably across the globe. Remote health monitoring has exceptional potential both to reduce the financial burden of healthcare, by reducing the number of patient visits to medical centres, and to improve treatment outcomes through closer monitoring. BioAmp Diagnostics is a startup exploiting highly innovative new technology developed by the founders at the University of Cambridge. Our patent-filed technology has significant advantages for remote health monitoring over current diagnostic technologies: a small form factor, extremely low power, multiplexing potential (hence adding accuracy to clinical decisions) and integrability with existing electronic infrastructure such as smartphones for efficient data transfer. We are initially targeting lucrative segments within the chronic and acute cardiac disease market, where our technology aids clinical decisions and adds significant value to healthcare providers. The BioAmp team combines expertise in electronic engineering, nanotechnology, informatics, biology and business development. Our complementary backgrounds form an excellent foundation to propel this powerful new technology forward to market.

3) FoetoH
FoetoH is a novel portable technology that lets pregnant mothers monitor the health of their babies themselves, at hospital-level accuracy, in the comfort of their own homes. Mothers would use the device by strapping on FoetoH’s sensor belt and launching its smartphone application. FoetoH acquires and analyses foetal heart rate signals according to clinical guidelines using a combination of wireless ultrasound and sophisticated signal processing. It then presents mothers with information on the health of their baby in real time, in an easy-to-understand, colour-coded manner. FoetoH also provides the mother with information on what to do next, and allows her to transfer the results to her physician. The product aims to provide reassurance to pregnant mothers in moments of uncertainty and empower them to be more actively involved in promoting the health of their babies, while simultaneously contributing to preventing stillbirths and neonatal deaths – 11 of which occur globally each minute. The company, F-GHS, is headquartered in London, UK. The current management consists of co-founders and Oxford PhDs Michelle Fernandes and Ricardo Pachón. Michelle is a clinician with expertise in foetal development and antenatal monitoring. Ricardo is a mathematician with 10+ years of experience in pattern-recognition automation. The management is supported by three advisory committees, consisting of medical, technical and business experts.

4) Hackett Biologics
Hackett Biologics was formally founded in 2009 and has primarily focused on designing and developing novel materials including injectables, patches, conduits and scaffolds. All research and therapies involving stem cells rely on a ‘cold chain’ for storage and distribution, thus introducing loss of efficacy and viability, chemical contaminants, logistical issues and costs. Hackett Biologics has developed a proprietary technology that stabilises stem cells at room temperature, removing the need for freezing; as such a significant market opportunity exists for the technology. Dr. Joanne M. Hackett, CSO and Founder, has a background in regenerative medicine and tissue engineering. She has been an assistant professor of regenerative medicine at Linköping University in Sweden, a visiting scientist/strategy lead at Pfizer Neusentis and Director of Business Development for Bodymetrics. She is currently the strategic & commercial partnerships executive at the Royal Society of Chemistry. Dr. Ryan B. MacDonald, Project Manager/scientific advisor, has been critically involved in Hackett Biologics since its inception in 2009. He has a background in stem cell biology, principally studying the differentiation and proliferation of neural stem cells. He is currently a Herchel Smith Research Fellow at the University of Cambridge, a research associate at Clare College and an Associate Fellow of the Higher Education Academy.

5) LipoPep
LipoPep is a targeted drug delivery system that allows existing therapeutics to be actively and selectively delivered to the placenta, providing the first treatment option for pregnancy complications. Placental abnormalities affect approximately 10% of pregnancies worldwide and cost in excess of US$40 billion per year. Such abnormalities lead to pre-eclampsia and fetal growth restriction, which result in an increased risk of stillbirth, premature birth, neonatal death and poor health in adulthood. Currently there is no therapeutic intervention for these complications, with the only treatment option being premature delivery. LipoPep is being taken through pre-clinical development by Dr Lynda Harris, a BBSRC David Phillips Research Fellow, and Anna King and Natalie Cureton, who are both undertaking PhDs in nanoscience. Development is fully supported by the School of Pharmacy and the Maternal and Fetal Health Research Centre at St Mary’s Hospital, at The University of Manchester.

6) MPDx
MPDx Technologies, founded in 2012 and based in Cambridge, UK, is developing portable, automated sample processing technology as an enabler for truly integrated sample-to-result diagnostics. MPDx's innovative, yet proven, robust sample extraction platform will address the urgent unmet clinical need for early diagnosis within high-value and rapidly growing infectious disease diagnostic markets, to deliver step-change benefits to consumers and have a profound impact on patient lives. MPDx's passionate, interdisciplinary team of technical, business and finance professionals comprises a wealth of complementary industry-specific expertise to ensure the success of the business.

7) Picoto
The cornerstone of healthy living is a healthy beginning. Yet 80% of the globe does not have this guarantee — vulnerable babies begin their lives every day. Picoto is our solution for newborns worldwide. It is a complete, networked system of intelligent sensors and algorithms that monitor neonatal health parameters — and empower actionable healthcare delivery to provide a neo-standard of care. Picoto is being developed by a group of dedicated, multidisciplinary researchers at the University of Oxford.

8) Puridify
Puridify Ltd is a London-based spinout company from the Department of Biochemical Engineering, University College London. Our flagship product, FibroSelect, is a new chromatography reagent structure that utilises existing purification chemistries in a more efficient manner. The result is a significant reduction in biotherapeutic manufacturing costs (c. 25%), addressing the global demand for cheaper drugs and widening patient access to new and existing therapeutics.

9) Pym
James Flewellen and Irwin Zaid are postdoctoral research fellows in the Department of Physics at the University of Oxford. We aim to develop a mobile diagnostic platform based on a novel form of microscopy. Most microscopic diagnoses are performed by highly-trained staff in dedicated medical laboratories looking at samples through a microscope. Our idea will automate much of this analysis and allow a health practitioner anywhere to perform a fast and cost-effective optical diagnosis of a fluid sample. Our plan involves developing a portable imaging device that attaches to your smartphone. The smartphone camera is used to form images of a fluid sample, which are uploaded to a cloud server for processing and analysis. Unlike conventional microscopy, we are able to capture 3D position and shape data of particles suspended in the sample. This data is then analysed by server-side software returning a diagnosis to the practitioner. Our first application is urinary tract infections, which are currently diagnosed by seeing bacteria in a urine sample – something our software can detect automatically.

10) Tecrea
Stem cell therapy offers promising potential to cure many clinical conditions such as spinal cord injury, stroke, amyotrophic lateral sclerosis, macular degeneration, Parkinson’s disease and Type I diabetes. Any cell can be re-programmed via the introduction of genes, proteins and small molecules to provide induced pluripotent stem cells (iPSC), which are of huge interest for therapy and are also valuable in drug discovery research and toxicology. Cell reprogramming, however, is greatly limited by poor gene or protein delivery efficiencies. Current delivery strategies are too toxic and difficult to translate into the clinic for therapeutic applications. Tecrea-stem has a novel nanotechnology-based delivery platform for biomolecules (DNA, RNA, proteins and small molecules). The platform is clinically safe and protected by a pending patent. This technology utilises an existing polymer that has been used safely for 50 years. Our discovery shows that this polymer forms nanoparticles with a wide range of cargo molecules and delivers them efficiently into eukaryotic cells with minimal stress. We have extensive lab-based evidence demonstrating how our technology works. We propose that our nano-polymer can be optimised for efficient stem cell reprogramming, which will facilitate the advancement of stem cell research and therapy.


This post was written by: John Daley

MIT's 2013 Top 10 Breakthrough Technologies - 3: Big Data from Cheap Phones

ORIGINAL: Tech Review
By David Talbot
April 23, 2013

Collecting and analyzing information from simple cell phones can provide surprising insights into how people move about and behave—and even help us understand the spread of diseases.



WHY IT MATTERS
Poor countries lack data-gathering infrastructure; phone data can provide it.

Breakthrough
Creating disease-fighting tools with cell-phone mobility data.

Key Players
• Caroline Buckee, Harvard University
• William Hoffman, World Economic Forum
• Alex Pentland, MIT
• Andy Tatem, University of Southampton

At a computer in her office at the Harvard School of Public Health in Boston, epidemiologist Caroline Buckee points to a dot on a map of Kenya’s western highlands, representing one of the nation’s thousands of cell-phone towers. In the fight against malaria, Buckee explains, the data transmitted from this tower near the town of Kericho has been epidemiological gold.

When she and her colleagues studied the data, they found that people making calls or sending text messages originating at the Kericho tower were making 16 times more trips away from the area than the regional average. What’s more, they were three times more likely to visit a region northeast of Lake Victoria that records from the health ministry identified as a malaria hot spot. The tower’s signal radius thus covered a significant waypoint for transmission of malaria, which can jump from human to human via mosquitoes. Satellite images revealed the likely culprit: a busy tea plantation that was probably full of migrant workers. The implication was clear, Buckee says. “There will be a ton of infected [people] there.”

Caroline Buckee
This work is now feeding into a new set of predictive models she is building. They show, for example, that even though malaria cases were seen at the tea plantation, taking steps to control malaria there would have less effect on the disease’s spread than concentrating those efforts at the source: Lake Victoria. That region has long been understood as a major center of malaria, but what hasn’t been available before is detailed information about the patterns of human travel there: how many people are coming and going, when they’re arriving and departing, which specific places they’re coming to, and which of those destinations attract the most people traveling on to new places.
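A toy calculation (with invented parameters, not Buckee’s actual model) illustrates the logic: when a destination’s infections are largely imported from a source patch, cutting transmission at the source lowers cases everywhere, while the same effort at the destination only helps locally.

```python
def steady_cases(r_source, r_dest, travel=0.1, seeding=1.0, iters=1000):
    """Crude two-patch importation model: each step, local cases
    reproduce at rate r (< 1 so the iteration converges), and the
    source exports a fraction `travel` of its cases downstream."""
    src = dst = 0.0
    for _ in range(iters):
        src = seeding + r_source * src
        dst = r_dest * dst + travel * src
    return src, dst

print(steady_cases(0.8, 0.5))    # baseline: about (5.0, 1.0)
print(steady_cases(0.4, 0.5))    # halve source transmission: dst drops to ~0.33
print(steady_cases(0.8, 0.25))   # halve destination transmission: dst only ~0.67
```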

Caroline Buckee, a Harvard epidemiologist, is using detailed data on population movements—gleaned from mobile phones—to build precise new tools for fighting the spread of malaria.

Existing efforts to gather that kind of travel data are spotty at best; sometimes public-health workers literally count people at transportation hubs, Buckee says, or nurses in far-flung clinics ask newly diagnosed malaria victims where they’ve been recently. “At many border crossings in Africa, they keep little slips of paper—but the slips get lost, and nobody keeps track,” she says. “We have abstractions and general models on travel patterns but haven’t been able to do this properly—ever.”

The data mining will help inform the design of new measures that are likely to include cheap, targeted campaigns of text messages—for example, warning visitors entering the Kericho tower’s signal zone to use bed netting. And it will help officials choose where to focus mosquito control efforts in the malarial areas. “You don’t want to be spraying every puddle for mosquito larvae all the time. But if you know there is a ton of importation from a certain spot, you want to increase your control program at that spot,” Buckee says. “And now I can pinpoint where the importation of a disease is especially important.”

Buckee’s most recent study, published last year in Science and based on records from 15 million Kenyan phones, is a result of a collaboration with her husband, Nathan Eagle, who has been working to make sense of cell-phone data for more than a decade. In the mid-2000s, after getting attention for his work mining data from the phones of volunteers at MIT, Eagle started to get calls from mobile carriers asking for insight into questions like why customers canceled their phone plans. Eagle began working with them. And when the couple spent 18 months in Africa starting in 2006—Buckee was doing work on the genetics of the malaria parasite—he studied call data for various purposes, trying to understand phenomena like ethnic divisions in Nairobi slums and the spread of cholera in Rwanda. Buckee’s results show what might be possible when the technology is turned on public-health problems. “This demonstrated ‘Yeah, we can really provide not just insight, but actually something that is actionable,’” says Eagle, now CEO of Jana, which runs mobile-phone surveys in the developing world. “This really does work.”

This is the future of epidemiology. If we are to eradicate malaria, this is how we will do it.

That demonstration suggests how such data might be harnessed to build tools that health-care workers, governments, and others can use to detect and monitor epidemics, manage disasters, and optimize transportation systems. Already, similar efforts are being directed toward goals as varied as understanding commuting patterns around Paris and managing festival crowds in Belgium. But mining phone records could be particularly useful in poor regions, where there’s often little or no other data-gathering infrastructure. “We are just at the start of using this data for these purposes,” says Vincent Blondel, a professor of applied mathematics at the University of Louvain in Belgium and a leading researcher on data gleaned from cell phones. “The exponential adoption of mobile phones in low-income settings—and the new willingness of some carriers to release data—will lead to new technological tools that could change everything.”

Blank Slate
The world’s six billion mobile phones generate huge amounts of data—including location tracking and information on commercial activity, search history, and links in social networks. Innumerable efforts to mine the data in different ways are under way in research and business organizations around the world. And of those six billion phones, five billion are in developing countries. Many of them are cheap phones that can do little besides make calls and send text messages. But all such activity can be tracked back to cell-phone towers, providing a rough way to trace a person’s movements. Throw in the spread of mobile payment technology for simple commerce and you have the raw material for insights not only into epidemiology but into employment trends, social tensions, poverty, transportation, and economic activity.
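To make the idea concrete, here is a toy sketch (record format and sample data invented for illustration; real research pipelines are far more involved) of how tower-level call records become coarse mobility traces: sort each user’s calls by time, then treat consecutive calls from different towers as a trip.

```python
from collections import defaultdict

# Invented sample records: (user_id, timestamp, tower_id)
records = [
    ("u1", 1000, "kericho"), ("u1", 5000, "lake_victoria"),
    ("u1", 9000, "kericho"), ("u2", 2000, "nairobi"),
    ("u2", 7000, "nairobi"),
]

def tower_trips(records):
    """Count tower-to-tower transitions, pooled over all users."""
    calls_by_user = defaultdict(list)
    for user, ts, tower in sorted(records):  # sorts by user, then time
        calls_by_user[user].append(tower)
    trips = defaultdict(int)
    for towers in calls_by_user.values():
        for a, b in zip(towers, towers[1:]):
            if a != b:                       # a change of tower = a trip
                trips[(a, b)] += 1
    return dict(trips)

print(tower_trips(records))
# {('kericho', 'lake_victoria'): 1, ('lake_victoria', 'kericho'): 1}
```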

This map, a product of cell-phone data analytics, shows the most important sources of malaria infections (darker shades)—taking into account the potential for further transmission caused by human travel—as well as the major destinations of people exposed to the disease (lighter shades). It can be used to determine where best to focus warnings and mosquito control techniques.


MIT's 2013 Top 10 Breakthrough Technologies - 2: Ultra-Efficient Solar Power

ORIGINAL: Tech Review
April 23, 2013

Doubling the efficiency of solar devices would completely change the economics of renewable energy. Here is a design that just might make it possible.

Harry Atwater thinks his lab can make an affordable device that produces more than twice the solar power generated by today’s panels. The feat is possible, says the Caltech professor of materials science and applied physics, because of recent advances in the ability to manipulate light at a very small scale.

Solar panels on the market today consist of cells made from a single semiconducting material, usually silicon. Since the material absorbs only a narrow band of the solar spectrum, much of sunlight’s energy is lost as heat: these panels typically convert less than 20 percent of that energy into electricity. But the device that Atwater and his colleagues have in mind would have an efficiency of at least 50 percent. It would use a design that efficiently splits sunlight, as a prism does, into six to eight component wavelengths—each one of which produces a different color of light. Each color would then be dispersed to a cell made of a semiconductor that can absorb it.


Atwater’s team is working on three designs. In one (see illustration), for which the group has made a prototype, sunlight is collected by a reflective metal trough and directed at a specific angle into a structure made of a transparent insulating material. Coating the outside of the transparent structure are multiple solar cells, each made from one of six to eight different semiconductors. Once light enters the material, it encounters a series of thin optical filters. Each one allows a single color to pass through to illuminate a cell that can absorb it; the remaining colors are reflected toward other filters designed to let them through.

Another design would employ nanoscale optical filters that could filter light coming from all angles. And a third would use a hologram instead of filters to split the spectrum. While the designs are different, the basic idea is the same: combine conventionally designed cells with optical techniques to efficiently harness sunlight’s broad spectrum and waste much less of its energy.
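A toy model (my own illustration of the principle, not the Caltech team’s analysis) shows why splitting the spectrum helps. For a cell with bandgap Eg, photons below Eg are lost entirely and photons above Eg surrender their excess energy as heat, so even an ideal single junction tops out around 44 percent before any device-level losses; giving each slice of the spectrum its own well-matched cell recovers much of what is thrown away.

```python
import numpy as np

# Treat the sun as a 5778 K blackbody; ignore all device-level losses.
kT_sun = 8.617e-5 * 5778                 # thermal energy of the sun, eV

E = np.linspace(0.01, 10.0, 20000)       # photon energies, eV
dE = E[1] - E[0]
flux = E**2 / np.expm1(E / kT_sun)       # Planck photon flux per unit energy
incident_power = np.sum(E * flux) * dE   # arbitrary units

def ultimate_efficiency(band_edges):
    """Fraction of power extracted when cell i absorbs photons in
    [band_edges[i], band_edges[i+1]) and delivers band_edges[i] each."""
    edges = list(band_edges) + [np.inf]
    extracted = 0.0
    for lo, hi in zip(edges, edges[1:]):
        band = (E >= lo) & (E < hi)
        extracted += lo * np.sum(flux[band]) * dE  # each photon yields Eg
    return extracted / incident_power

print(f"1 cell  (Eg = 1.1 eV): {ultimate_efficiency([1.1]):.0%}")
print(f"6 cells (0.7-2.6 eV):  {ultimate_efficiency([0.7, 1.0, 1.3, 1.6, 2.0, 2.6]):.0%}")
```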

It’s not yet clear which design will offer the best performance, says Atwater. But the devices envisioned would be less complex than many electronics on the market today, he says, which makes him confident that once a compelling prototype is fabricated and optimized, it could be commercialized in a practical way.

Achieving ultrahigh efficiency in solar designs should be a primary goal of the industry, argues Atwater, since it’s now “the best lever we have” for reducing the cost of solar power. That’s because prices for solar panels have plummeted over the past few years, so continuing to focus on making them less expensive would have little impact on the overall cost of a solar power system; expenses related to things like wiring, land, permitting, and labor now make up the vast majority of that cost. Making modules more efficient would mean that fewer panels would be needed to produce the same amount of power, so the costs of hardware and installation could be greatly reduced. “Within a few years,” Atwater says, “there won’t be any point to working on technology that has efficiency that’s less than 20 percent.”

Announcement: Reducing our irreproducibility

ORIGINAL: Nature
24 April 2013

Over the past year, Nature has published a string of articles that highlight failures in the reliability and reproducibility of published research (collected and freely available at go.nature.com/huhbyr). The problems arise in laboratories, but journals such as this one compound them when they fail to exert sufficient scrutiny over the results that they publish, and when they do not publish enough information for other researchers to assess results properly.

From next month, Nature and the Nature research journals will introduce editorial measures to address the problem by improving the consistency and quality of reporting in life-sciences articles. To ease the interpretation and improve the reliability of published results we will more systematically ensure that key methodological details are reported, and we will give more space to methods sections. We will examine statistics more closely and encourage authors to be transparent, for example by including their raw data.

Central to this initiative is a checklist intended to prompt authors to disclose technical and statistical information in their submissions, and to encourage referees to consider aspects important for research reproducibility (go.nature.com/oloeip). It was developed after discussions with researchers on the problems that lead to irreproducibility, including workshops organized last year by US National Institutes of Health (NIH) institutes. It also draws on published concerns about reporting standards (or the lack of them) and the collective experience of editors at Nature journals.

The checklist is not exhaustive. It focuses on a few experimental and analytical design elements that are crucial for the interpretation of research results but are often reported incompletely. For example, authors will need to describe methodological parameters that can introduce bias or influence robustness, and provide precise characterization of key reagents that may be subject to biological variability, such as cell lines and antibodies. The checklist also consolidates existing policies about data deposition and presentation.

We will also demand more precise descriptions of statistics, and we will commission statisticians as consultants on certain papers, at the editor’s discretion and at the referees’ suggestion.

We recognize that there is no single way to conduct an experimental study. Exploratory investigations cannot be done with the same level of statistical rigour as hypothesis-testing studies. Few academic laboratories have the means to perform the level of validation required, for example, to translate a finding from the laboratory to the clinic. However, that should not stand in the way of a full report of how a study was designed, conducted and analysed that will allow reviewers and readers to adequately interpret and build on the results.

To allow authors to describe their experimental design and methods in as much detail as necessary, the participating journals, including Nature, will abolish space restrictions on the methods section.

To further increase transparency, we will encourage authors to provide tables of the data behind graphs and figures. This builds on our established data-deposition policy for specific experiments and large data sets. The source data will be made available directly from the figure legend, for easy access. We continue to encourage authors to share detailed methods and reagent descriptions by depositing protocols in Protocol Exchange (www.nature.com/protocolexchange), an open resource linked from the primary paper.

Renewed attention to reporting and transparency is a small step. Much bigger underlying issues contribute to the problem, and are beyond the reach of journals alone. Too few biologists receive adequate training in statistics and other quantitative aspects of their subject. Mentoring of young scientists on matters of rigour and transparency is inconsistent at best. In academia, the ever-increasing pressures to publish and chase funds provide little incentive to pursue studies and publish results that contradict or confirm previous papers. Those who document the validity or irreproducibility of a published piece of work seldom get a welcome from journals and funders, even as money and effort are wasted on false assumptions.

Tackling these issues is a long-term endeavour that will require the commitment of funders, institutions, researchers and publishers. It is encouraging that NIH institutes have led community discussions on this topic and are considering their own recommendations. We urge others to take note of these and of our initiatives, and do whatever they can to improve research reproducibility.

Nature 496, 398 (25 April 2013) doi:10.1038/496398a

Sunday, April 28, 2013

Medellín Attracts the Creative Energy of the British

ORIGINAL: El Colombiano
27 de abril de 2013

The Minister believes that Medellín and Glasgow will be great smart cities. PHOTO: JAIME PÉREZ
David Willetts, British Minister for Universities and Science, is betting on inclusive universities, with no limits of age or social class, to turn them into an engine of development.

A country in which good students of limited means receive letters of congratulation from the Government, and in which students over 60 in campus classrooms are no novelty, is what David Willetts, British Minister for Universities and Science, wants. He was in Medellín this week.

This Englishman, with his scrutinizing gaze, attentive ear and precise words, is the brain behind an educational proposal that seeks to establish the university as the engine of development of a country whose policies tend toward reducing public spending and betting on technical education in the midst of a battered economy.

That is why he came to warmer lands, where the label “most innovative city in the world” is beginning to attract gazes other than those of scrutiny and prejudice, the legacy of drug trafficking and violence, to look for new ways to employ the creative energy of the British and to strengthen the relationship between education and development, as well as the ties of cooperation between the two countries on those fronts.

“I would like to see British and Colombian researchers working side by side, tackling problems in whose solutions we share a common interest,” says Willetts, reinforcing his words with precise hand gestures. “We could, for example, try to understand Colombia’s extraordinary biodiversity together,” he adds.

He discussed this with the Governor of Antioquia, Sergio Fajardo Valderrama, with whom he also analyzed the department’s strengths and how to shape a more strategic relationship built on education.

From that exchange of ideas came three conclusions:
  • to try to build bridges between the Universidad de Antioquia and British universities; 
  • to foster entrepreneurship through working groups with experts from both nations; and 
  • to find ways for the United Kingdom to contribute to the development of the 80 educational parks that the Governor’s Office will build.

“We have the experience and could provide facilities for those educational parks,” the Englishman notes. “They could be language centres or places to complement online learning with the help of tutors.”

Educated to educate 
More than two decades in politics, favouring academic approaches and social theories when drawing up his proposals, left him the nickname “Two Brains”, also a reference to the size of his skull.

“I imagine there are worse nicknames. ‘Two Chins’ would be much worse,” he once replied, smiling, to the journalist Caroline Crampton.

Of his 57 years he set aside three (from 2007 to 2010) to serve as Shadow Secretary of State for Business, Innovation and Skills (the shadow cabinet is an institution that guarantees the opposition’s participation in the British political system), which gave him the experience to speak frankly about new ways of educating for development.

“The British, like Colombians, are a bit eccentric and free-spirited, so we see and do things in a profoundly different way,” he states, his fingers coming together and pointing to emphasize his idea. “On the other hand, the Government has very limited power over the universities, since it gives them absolute freedom to pursue their research.”

He brought that advice, and more, in his bag to offer our leaders. From it he also drew an argument that could give Medellín clues about how to make more of its entrepreneurs’ innovative potential: Britain’s extensive experience in supporting start-ups and in turning small companies into big business opportunities.

“I spoke to the Mayor of Medellín, Aníbal Gaviria Correa, about unprecedented research on smart cities under way in the United Kingdom, from which you can take advantage of the incredible flow of information on energy, transport, medicine and other fronts that exists in your city, harnessing the individual potential of your people as well as new technologies.”

Against the current 
Behind the dark blue background of his tie, covered in earth-toned figures scattered untidily across the garment, the British Minister lays out the strokes of a plan that reveal a political strategist who fiercely defends the interests of the portfolio in his charge.

For although the British Prime Minister, David Cameron, and the Minister for Business and Education, Matthew Hancock, have made it clear that the Government’s bet is to favour apprenticeships as part of its strategy to strengthen the economy, Willetts has made sure that the cuts do not touch the budget for universities, and he wants to place those institutions at the core of development.

“We have protected the science budget, which is 4.6 billion pounds a year, and secured extra monetary support for capital investment and investment in technology,” he explains.

His work has meant that while other ministries watch their resources shrink considerably, the money for universities keeps growing, “even though this Government has a serious liquidity problem,” as he himself points out.

“I tell the universities that for every pound the Government puts in I want them to find two pounds from private investors, which allowed a billion pounds to be added to the system in the last year. Of that, the Government has put in 300 million, while companies, commercial partners and even charities have contributed close to 600 million.”

The result: the United Kingdom ranks second in the world among countries with the highest-quality universities and the strongest commercial relationships.

Approval
Some of Willetts’s proposals have made him the target of strong criticism, among them his wish to bring older adults into university and to send letters to students of limited means who do well in the tests for the General Certificate of Secondary Education (GCSE), encouraging them to go to university.

“These students need encouragement, but they need more financial help. In the United States, birthplace of the original letter-sending proposal, a national analysis of secondary-school students showed that 76 percent of those with the best results who came from low-income backgrounds were rejected by universities,” said The Observer columnist Barbara Ellen.

Willetts’s answer: a range of possibilities. “There are loans and grants, but I think the important thing is that we are good at online education, or what we call blended learning, with some courses online and others face-to-face. It is also important to have low-cost universities throughout the country,” he explains.

As for his initiative to open higher-education loans to people of any age, until recently reserved for those under 54, doubts arise about the willingness of financial institutions and the labour market to back such a venture.

“We are breaking with the idea that you can only go to university at 18 or 19, straight out of secondary school. We believe in a society in which there are second chances,” argues David Willetts.

To back up his decision he will keep his eyes on the figures, which for now indicate that of the 552,240 students who started university last year, only 1,940 were over 60.

“I believe it is the right thing: that universities be open to people whatever their age, because many mature people dream of gaining access to higher education.”

OPTIONS
MEDELLÍN SEEKS TO STRENGTHEN EXCHANGE
The Mayor of Medellín, Aníbal Gaviria Correa, wants to strike sparks from the symbiosis between the United Kingdom’s experience in innovation and the strengthening of technology transfer in Medellín.
  • Expectations include the search for alternatives to improve technology in companies and universities, the promotion of international research, and carrying forward the Ruta N projects and the Sevilla technology district.
  • After investing close to $40 million to strengthen bilingualism, Medellín wants the United Kingdom’s help to promote the mass adoption of new technologies.
IN SHORT
Opening the university to everyone and increasing investment in science and higher education are the against-the-current proposals, in the middle of the European crisis, of the British Minister for Universities and Science.