Tuesday, June 30, 2015

Meet Amelia, the AI Platform That Could Change the Future of IT


Chetan Dube. Image credit: Photography by Jesse Dittmar

Her name is Amelia, and she is the complete package: smart, sophisticated, industrious and loyal. No wonder her boss, Chetan Dube, can’t get her out of his head.

“My wife is convinced I’m having an affair with Amelia,” Dube says, leaning forward conspiratorially. “I have a great deal of passion and infatuation with her.”

He’s not alone. Amelia beguiles everyone she meets, and those in the know can’t stop buzzing about her. The blue-eyed blonde’s star is rising so fast that if she were a Hollywood ingénue or fashion model, the tabloids would proclaim her an “It” girl, but the tag doesn’t really apply. Amelia is more of an IT girl, you see. In fact, she’s all IT.

Amelia is an artificial intelligence platform created by Dube’s managed IT services firm IPsoft, a virtual agent avatar poised to redefine how enterprises operate by automating and enhancing a wide range of business processes. The product of an obsessive and still-ongoing 16-year developmental cycle, she—yes, everyone at IPsoft speaks about Amelia using feminine pronouns—leverages cognitive technologies to interface with consumers and colleagues in astoundingly human terms: parsing questions, analyzing intent and even sensing emotions to resolve issues more efficiently and effectively than flesh-and-blood customer service representatives.


Install Amelia in a call center, for example, and her patent-pending intelligence algorithms absorb in a matter of seconds the same instruction manuals and guidelines that human staffers spend weeks or even months memorizing. Instead of simply recognizing individual words, Amelia grasps the deeper implications of what she reads, applying logic and making connections between concepts. She relies on that baseline information to reply to customer email and answer phone calls; if she understands the query, she executes the steps necessary to resolve the issue, and if she doesn’t know the answer, she scans the web or the corporate intranet for clues. Only when Amelia cannot locate the relevant information does she escalate the case to a human expert, observing the response and filing it away for the next time the same scenario unfolds.
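
IPsoft has not published Amelia's algorithms, but the triage logic described above maps onto a simple decision flow. Here is a minimal, runnable sketch in Python; every function name, rule, and knowledge-base entry is invented for illustration:

```python
# A minimal sketch of the escalation loop described above.
# Everything here is hypothetical; IPsoft has not published Amelia's design.

def parse_intent(query):
    # Stand-in for real question parsing and intent analysis.
    return query.strip().lower().rstrip("?")

def search_fallbacks(intent):
    # Stand-in for scanning the web or the corporate intranet for clues.
    return None

def handle_query(query, kb, ask_expert):
    intent = parse_intent(query)
    answer = kb.get(intent) or search_fallbacks(intent)
    if answer is None:
        # Last resort: escalate to a human expert, observe the response,
        # and file it away for the next time the same scenario unfolds.
        answer = ask_expert(query)
        kb[intent] = answer
    return answer

kb = {"how do i reset my password": "Walk the caller through the portal."}
print(handle_query("How do I reset my password?", kb,
                   ask_expert=lambda q: "Escalated to a human agent."))
```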

Scientists have built artificial neurons that fully mimic human brain cells


They could supplement our brain function.

Researchers have built the world’s first artificial neuron that’s capable of mimicking the function of an organic brain cell - including the ability to translate chemical signals into electrical impulses, and communicate with other human cells.

These artificial neurons are the size of a fingertip and contain no ‘living’ parts, but the team is working on shrinking them down so they can be implanted into humans. This could allow us to effectively replace damaged nerve cells and develop new treatments for neurological disorders, such as spinal cord injuries and Parkinson’s disease.

Professor Agneta Richter-Dahlfors. Photo: Stefan Zimmerman
"Our artificial neuron is made of conductive polymers and it functions like a human neuron," lead researcher Agneta Richter-Dahlfors from the Karolinska Institutet in Sweden said in a press release.

Until now, scientists have only been able to stimulate brain cells using electrical impulses, which is how they transmit information within the cells. But in our bodies they're stimulated by chemical signals, and this is how they communicate with other neurons.

By connecting enzyme-based biosensors to organic electronic ion pumps, Richter-Dahlfors and her team have now managed to create an artificial neuron that can mimic this function, and they've shown that it can communicate chemically with organic brain cells even over large distances.

"The sensing component of the artificial neuron senses a change in chemical signals in one dish, and translates this into an electrical signal," said Richter-Dahlfors. "This electrical signal is next translated into the release of the neurotransmitter acetylcholine in a second dish, whose effect on living human cells can be monitored."

This means that artificial neurons could theoretically be integrated into complex biological systems, such as our bodies, and could allow scientists to replace or bypass damaged nerve cells. So imagine being able to use the device to restore function to paralysed patients, or heal brain damage.

"Next, we would like to miniaturise this device to enable implantation into the human body," said Richer-Dahlfors.“We foresee that in the future, by adding the concept of wireless communication, the biosensor could be placed in one part of the body, and trigger release of neurotransmitters at distant locations."

"Using such auto-regulated sensing and delivery, or possibly a remote control, new and exciting opportunities for future research and treatment of neurological disorders can be envisaged," she added.

The results of lab trials have been published in the journal Biosensors and Bioelectronics.

We're really looking forward to seeing where this research goes. While the potential for treating neurological disorders is incredibly exciting, the artificial neurons could one day also help us to supplement our mental abilities and add extra memory storage or offer faster processing, and that opens up some pretty awesome possibilities.


ORIGINAL: Science Alert
By FIONA MACDONALD
29 JUN 2015

Friday, June 26, 2015

New tech tool speeds up stem cell research

It’s hard to do a good job if you don’t have the right tools. Now researchers have access to a great new tool that could really help them accelerate their work, a tool its developers say will revolutionize the way cell biologists develop stem cell models to test in the lab.
Fluidigm’s Callisto system
The device is called Callisto™. It was created by Fluidigm thanks to two grants from CIRM. The goal was to develop a device that would allow researchers more control and precision in the ways that they could turn stem cells into different kinds of cell. This is often a long, labor-intensive process requiring round-the-clock maintenance of the cells to get them to make the desired transformation.

Callisto changes that. The device has 32 chambers, giving researchers more control over the conditions that cells are stored in, even allowing them to create different environmental conditions for different groups of cells. All with much less human intervention.

Lila Collins, Ph.D., the CIRM Science Officer who has worked closely with Fluidigm on this project over the years, says this system has some big advantages over past approaches:

Creating the optimal conditions for reprogramming, stem cell culture and stem cells has historically been a tedious and manually laborious task. This system allows a user to more efficiently test a variety of cellular stimuli at various times without having to stay tied to the bench. Once the chip is set up in the instrument, the user can go off and do other things.

Having a machine that is faster and easier to use is not the only advantage Callisto offers; it also gives researchers the ability to systematically and simultaneously test different combinations of factors, to see which ones are most effective at changing stem cells into different kinds of cell. And once they know which combinations work best, they can use Callisto to reproduce them time after time. That consistency means researchers in different parts of the world can create cells under exactly the same conditions, so that results from one study will more readily support and reflect results from another.
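
In software terms, systematically testing "different combinations of factors" is a sweep over a factorial design. A hedged sketch of what enumerating such a matrix looks like, with all factor names and values invented for illustration (this is not Fluidigm's software):

```python
# Enumerating a full factorial sweep of culture conditions, in the spirit
# of the combination testing described above. Factor names and values
# are invented.
from itertools import product

factors = {
    "growth_factor": ["FGF2", "BMP4"],
    "concentration_ng_per_ml": [10, 50, 100],
    "exposure_hours": [24, 48],
}

conditions = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(f"{len(conditions)} chambers needed for a full sweep")  # 2 * 3 * 2 = 12
for condition in conditions[:3]:
    print(condition)
```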

In a news release about Callisto, Fluidigm’s President and CEO Gajus Worthington, says this could be tremendously useful in developing new therapies:

“Fluidigm aims to enable important research that would otherwise be impractical. The Callisto system incorporates some of our finest microfluidic technology to date, and will allow researchers to quickly and easily create complex cell culture environments. This in turn can help reveal how stem cells make fate decisions. Callisto makes challenging applications, such as cellular reprogramming and analysis, more accessible to a wide range of scientists. We believe this will move biological discovery forward significantly.”

And as Collins points out, Callisto doesn’t just do this on a bulk level, working with millions of cells at a time, the way the current methods do:

Using a bulk method it’s possible that one might miss an important event in the mixture. The technology in this system allows the user to stimulate and study individual cells. In this way, one could measure changes in small sub-populations and find ways to increase or decrease them.

Having the right tools doesn’t always mean you are going to succeed, but it certainly makes it a lot easier.


ORIGINAL: California's Stem Cell Agency. Center for Regenerative Medicine

Thursday, June 25, 2015

Here's how we're fighting cancer in a completely new way

Image: Juan Gaertner/Shutterstock.com

Why chemo could be a thing of the past.
This article was written by Mark Cragg from the University of Southampton, and was originally published by The Conversation.

We’re beginning to treat cancer in a whole new way. Rather than killing cancer cells directly with chemo or radiotherapy, the latest treatments are designed to promote the body’s natural immune control over the disease. So-called immunotherapy works to stimulate the body’s own immune system to destroy the cancer. It is not a new concept and was first described more than a century ago, but for the first time it is beginning to deliver long-lasting responses, which some are daring to call cures.

Behind these advances has been a more sophisticated understanding of the relationship between the immune system and cancer, particularly how the cancer is seen as a danger by the body and can disguise itself from immune attack.

The most promising immunotherapies are antibody drugs, which target key switches on immune cells and fall into two main classes:
  • checkpoint blockers such as ipilimumab and nivolumab, which remove the cancer’s ability to switch off the immune system, and 
  • immunostimulators such as anti-CD40 and anti-4-1BB, which promote active immune responses from the body.
Immunotherapy advantages

There are several key reasons why weaponising the immune system in this way shows such promise in the fight against cancer.
  1. First, the immune system is mobile. Its ability to patrol the whole body means it is able to recognise cancer cells wherever they are. And cancer’s ability to spread is frequently the cause of recurrence following other treatments.
  2. Second, the immune system is self-amplifying. It is able to increase its response as required to tackle large, advanced cancers. This property means that it will sometimes work better the more cancer is present, responding to a larger immune stimulation.
  3. Third, the immune system can evolve and adapt to changes in the cancer. Cancers are genetically unstable, meaning that they can change and 'escape' from conventional treatments. This situation is exactly what the immune system has evolved to cope with in its battle with pathogens. So as the tumour changes, the immune system can also change in parallel, keeping the cancer cells locked down.
  4. Fourth, the immune system can recognise an almost limitless number of target molecules on the cancer. This ability to recognise so many targets at once makes it much more difficult for rare variant cancer cells to escape out of immune control by changing their appearance. It also broadens the types of cancer that may be susceptible to immunotherapy.
  5. Finally, the immune system has memory. We see this with infectious diseases, with protection against a second round of infection from a particular germ. This is what provides us with life-long protection from some diseases after catching them as children or receiving vaccinations. For cancer, this means that the immune system can be 'immunised' to the cancer cells and detect and delete them if they try to grow back. Most cancer treatments only work while they are being given: an immune response can last a lifetime.

These five features of immunotherapy combine to deliver major benefits, including the ability to deliver durable, perhaps life-long responses, tantamount to cures, even in advanced, previously fatal cancers.

Future challenges

The challenge now is to understand why some people, and some cancers, respond much better to these therapies than others and how to increase the proportion of people who experience good responses. Data reported only last month shows that combining immunotherapy treatments by giving two checkpoint-blocking antibodies at the same time extends the number of patients with effective and lasting responses. Unfortunately, it also increases the unwanted side effects from immune attack on some of the body’s normal tissues.

While the results from the recent clinical trials are incredibly promising, it is clear that we are just at the beginning of our journey to understand the immune system and harness its power to destroy cancer. We already know that the complex interplay between the genetic make-up of the tumour, the status of someone’s immune system, and the interaction between the two will sculpt the immune response in different ways.

How, then, to best boost the immune system? We recognise that large multidisciplinary teams – comprising clinicians, immunologists, molecular biologists, geneticists and others – with concentrated resources are required. In Southampton, this will coalesce around a new purpose-built Centre for Cancer Immunology, which will open in 2017 with the aim of bringing the right people together and providing cutting edge facilities.

With the development of such centres, our understanding of the immune system in health and disease will continue the rapid expansion of immunotherapy, leading to many new opportunities for treatment. Soon these will become more specific, effective and safe – leading us into a new era of cancer treatment.


Mark Cragg is Professor of Experimental Cancer Research at University of Southampton.

This article was originally published on The Conversation. Read the original article.


ORIGINAL: Science Alert
MARK CRAGG, THE CONVERSATION
23 JUN 2015

Chemical Battery Can Recharge Itself With Light


Image: Musthafa Ottakam Thotiyl/IISER Pune
Batteries, by definition, convert chemical energy into electricity. Once you’ve sucked them dry, you have to reverse the process to convert electricity into chemical energy, and for that, you need a source of electricity. It’s not like it’s hard to do this, but it is certainly a minor annoyance that could do with a fix.

Researchers at the Indian Institute of Science Education and Research (IISER) in Pune, India, have skipped the annoying step by developing a battery that charges directly from light. We’re not talking about a battery with a solar panel on it: it’s a “photo battery” where the anode itself is made of titanium nitride and ambient light.



Under artificial light, this prototype battery has a capacity of 77.8 mAh/g. It’ll quite happily power a small fan or LED light for about 30 seconds, and then if you give it a break for 30 seconds while shining a light on it, it’ll be all charged up and good to go again. Over 100 cycles, the battery retained a bit over 70 percent of its discharge capacity, which at least suggests some potential for longevity and usefulness.
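
That retention figure is easy to sanity-check against the paper's abstract (quoted below), which reports 72 percent retention over more than 100 cycles:

```python
capacity_under_light = 77.8             # mAh/g under artificial light
retained = 0.72 * capacity_under_light  # 72% retention per the abstract
print(f"{retained:.1f} mAh/g")          # ~56.0 mAh/g, "a bit over 70 percent"
```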

In addition to being charged directly by light, which is pretty awesome, this battery design offers other benefits, including
  • “a sustainable and economical anode material which will not be consumed as a part of the discharge reactions,” and 
  • “an anode material that is free from loss of active materials, irreversible structural deformations, spontaneous deinsertion reactions, and safety concerns commonly encountered in the state of the art anode materials in [aqueous rechargeable batteries].”

From the paper’s abstract: “Here we show a surrogate strategy for power production, wherein light is used to actuate a discharge chemistry in the cathode of an aqueous rechargeable battery (ARB). The proposed photo battery consists of a titanium nitride photoanode, promising cathode material iron(III) hexacyanoferrate(II) as the battery active species and Na2S2O8 as the chemical charging agent. The photo battery delivered negligible capacity in the dark and the capacity shot up to 77.8 mAh/g when artificially shined light, confirming that the battery chemistry is light driven. In the ambient light, the device retained 72% of its artificial light discharge capacity with a stable cycling for more than 100 cycles. Further, an unprecedented means for charging the battery rapidly is presented using Na2S2O8 and it revitalized the battery in 30 s without any external bias. This methodology of expending a photoanode extends to a battery that is free from dissolution of active materials, irreversible structural changes, spontaneous deinsertion reactions, and safety concerns commonly encountered in the state of the art anode materials in ARBs. Apart from bringing out a sustainable way for power production, this device opens up avenues for charging the battery in the likely events of electrical input unavailability, while solving the critical issues of longer charging time and higher charging voltage.”

Source: http://pubs.acs.org/doi/full/10.1021/acs.jpcc.5b02871
J. Phys. Chem. C, 2015, 119 (25), pp 14010–14016
DOI: 10.1021/acs.jpcc.5b02871
Publication Date (Web): May 27, 2015
Copyright © 2015 American Chemical Society

According to a press release from the American Chemical Society, “the researchers say their design is a promising first step toward a more sustainable and safer battery technology.” In other words, this is a thing that does cool stuff in a lab right now, but getting your hopes up for a light-powered battery in your cell phone might be premature by a decade or so. For now, the best you’ll be able to do is read the full paper here.

ORIGINAL: IEEE Spectrum
By Evan Ackerman
Posted 24 Jun 2015

Wednesday, June 24, 2015

A New Physics Theory of Life

Jeremy England, a 31-year-old physicist at MIT, thinks he has found the underlying physics driving the origin and evolution of life.
Katherine Taylor for Quanta Magazine

Why does life exist?
Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

Kristian Peters. Cells from the moss Plagiomnium affine with visible chloroplasts, organelles that conduct photosynthesis by capturing sunlight.

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

“Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

Others, such as Eugene Shakhnovich, a professor of chemistry, chemical biology and biophysics at Harvard University, are not convinced. “Jeremy’s ideas are interesting and potentially promising, but at this point are extremely speculative, especially as applied to life phenomena,” Shakhnovich said.

England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab.

“He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”

Courtesy of Jeremy England


A computer simulation by Jeremy England and colleagues shows a system of particles confined inside a viscous fluid in which the turquoise particles are driven by an oscillating force. Over time (from top to bottom), the force triggers the formation of more bonds among the particles.

At the heart of England’s idea is the second law of thermodynamics, also known as the law of increasing entropy or the “arrow of time.” Hot things cool down, gas diffuses through air, eggs scramble but never spontaneously unscramble; in short, energy tends to disperse or spread out as time progresses. Entropy is a measure of this tendency, quantifying how dispersed the energy is among the particles in a system, and how diffuse those particles are throughout space. It increases as a simple matter of probability: There are more ways for energy to be spread out than for it to be concentrated. Thus, as particles in a system move around and interact, they will, through sheer chance, tend to adopt configurations in which the energy is spread out. Eventually, the system arrives at a state of maximum entropy called “thermodynamic equilibrium,” in which energy is uniformly distributed. A cup of coffee and the room it sits in become the same temperature, for example. As long as the cup and the room are left alone, this process is irreversible. The coffee never spontaneously heats up again because the odds are overwhelmingly stacked against so much of the room’s energy randomly concentrating in its atoms.

Although entropy must increase over time in an isolated or “closed” system, an “open” system can keep its entropy low — that is, divide energy unevenly among its atoms — by greatly increasing the entropy of its surroundings. In his influential 1944 monograph “What Is Life?” the eminent quantum physicist Erwin Schrödinger argued that this is what living things must do. A plant, for example, absorbs extremely energetic sunlight, uses it to build sugars, and ejects infrared light, a much less concentrated form of energy. The overall entropy of the universe increases during photosynthesis as the sunlight dissipates, even as the plant prevents itself from decaying by maintaining an orderly internal structure.

Life does not violate the second law of thermodynamics, but until recently, physicists were unable to use thermodynamics to explain why it should arise in the first place. In Schrödinger’s day, they could solve the equations of thermodynamics only for closed systems in equilibrium. In the 1960s, the Belgian physicist Ilya Prigogine made progress on predicting the behavior of open systems weakly driven by external energy sources (for which he won the 1977 Nobel Prize in chemistry). But the behavior of systems that are far from equilibrium, which are connected to the outside environment and strongly driven by external sources of energy, could not be predicted.

This situation changed in the late 1990s, due primarily to the work of Chris Jarzynski, now at the University of Maryland, and Gavin Crooks, now at Lawrence Berkeley National Laboratory. Jarzynski and Crooks showed that the entropy produced by a thermodynamic process, such as the cooling of a cup of coffee, corresponds to a simple ratio: the probability that the atoms will undergo that process divided by their probability of undergoing the reverse process (that is, spontaneously interacting in such a way that the coffee warms up). As entropy production increases, so does this ratio: A system’s behavior becomes more and more “irreversible.” The simple yet rigorous formula could in principle be applied to any thermodynamic process, no matter how fast or far from equilibrium. “Our understanding of far-from-equilibrium statistical mechanics greatly improved,” Grosberg said. England, who is trained in both biochemistry and physics, started his own lab at MIT two years ago and decided to apply the new knowledge of statistical physics to biology.
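
In symbols, the ratio Jarzynski and Crooks identified is usually written as follows (this is the standard textbook statement of the result, not a formula quoted in the article):

$$\Delta S = k_B \ln \frac{P[x(t)]}{P[\tilde{x}(t)]}$$

where $P[x(t)]$ is the probability of the forward trajectory, $P[\tilde{x}(t)]$ is the probability of its time-reverse, and $k_B$ is Boltzmann's constant. The more entropy a process produces, the larger this ratio and the more irreversible the process.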

Using Jarzynski and Crooks’ formulation, he derived a generalization of the second law of thermodynamics that holds for systems of particles with certain characteristics: The systems are strongly driven by an external energy source such as an electromagnetic wave, and they can dump heat into a surrounding bath. This class of systems includes all living things. England then determined how such systems tend to evolve over time as they increase their irreversibility. “We can show very simply from the formula that the more likely evolutionary outcomes are going to be the ones that absorbed and dissipated more energy from the environment’s external drives on the way to getting there,” he said. The finding makes intuitive sense: Particles tend to dissipate more energy when they resonate with a driving force, or move in the direction it is pushing them, and they are more likely to move in that direction than any other at any given moment.

“This means clumps of atoms surrounded by a bath at some temperature, like the atmosphere or the ocean, should tend over time to arrange themselves to resonate better and better with the sources of mechanical, electromagnetic or chemical work in their environments,” England explained.



Courtesy of Michael Brenner/Proceedings of the National Academy of Sciences

Self-Replicating Sphere Clusters: According to new research at Harvard, coating the surfaces of microspheres can cause them to spontaneously assemble into a chosen structure, such as a polytetrahedron (red), which then triggers nearby spheres into forming an identical structure.

Self-replication (or reproduction, in biological terms), the process that drives the evolution of life on Earth, is one such mechanism by which a system might dissipate an increasing amount of energy over time. As England put it, “A great way of dissipating more is to make more copies of yourself.” In a September paper in the Journal of Chemical Physics, he reported the theoretical minimum amount of dissipation that can occur during the self-replication of RNA molecules and bacterial cells, and showed that it is very close to the actual amounts these systems dissipate when replicating. He also showed that RNA, the nucleic acid that many scientists believe served as the precursor to DNA-based life, is a particularly cheap building material. Once RNA arose, he argues, its “Darwinian takeover” was perhaps not surprising.

The chemistry of the primordial soup, random mutations, geography, catastrophic events and countless other factors have contributed to the fine details of Earth’s diverse flora and fauna. But according to England’s theory, the underlying principle driving the whole process is dissipation-driven adaptation of matter.

This principle would apply to inanimate matter as well. “It is very tempting to speculate about what phenomena in nature we can now fit under this big tent of dissipation-driven adaptive organization,” England said. “Many examples could just be right under our nose, but because we haven’t been looking for them we haven’t noticed them.”

Scientists have already observed self-replication in nonliving systems. According to new research led by Philip Marcus of the University of California, Berkeley, and reported in Physical Review Letters in August, vortices in turbulent fluids spontaneously replicate themselves by drawing energy from shear in the surrounding fluid. And in a paper appearing online this week in Proceedings of the National Academy of Sciences, Michael Brenner, a professor of applied mathematics and physics at Harvard, and his collaborators present theoretical models and simulations of microstructures that self-replicate. These clusters of specially coated microspheres dissipate energy by roping nearby spheres into forming identical clusters. “This connects very much to what Jeremy is saying,” Brenner said.

Besides self-replication, greater structural organization is another means by which strongly driven systems ramp up their ability to dissipate energy. A plant, for example, is much better at capturing and routing solar energy through itself than an unstructured heap of carbon atoms. Thus, England argues that under certain conditions, matter will spontaneously self-organize. This tendency could account for the internal order of living things and of many inanimate structures as well. “Snowflakes, sand dunes and turbulent vortices all have in common that they are strikingly patterned structures that emerge in many-particle systems driven by some dissipative process,” he said. Condensation, wind and viscous drag are the relevant processes in these particular cases.

“He is making me think that the distinction between living and nonliving matter is not sharp,” said Carl Franck, a biological physicist at Cornell University, in an email. “I’m particularly impressed by this notion when one considers systems as small as chemical circuits involving a few biomolecules.”



Wilson Bentley

If a new theory is correct, the same physics it identifies as responsible for the origin of living things could explain the formation of many other patterned structures in nature. Snowflakes, sand dunes and self-replicating vortices in the protoplanetary disk may all be examples of dissipation-driven adaptation.

England’s bold idea will likely face close scrutiny in the coming years. He is currently running computer simulations to test his theory that systems of particles adapt their structures to become better at dissipating energy. The next step will be to run experiments on living systems.

Prentiss, who runs an experimental biophysics lab at Harvard, says England’s theory could be tested by comparing cells with different mutations and looking for a correlation between the amount of energy the cells dissipate and their replication rates. “One has to be careful because any mutation might do many things,” she said. “But if one kept doing many of these experiments on different systems and if [dissipation and replication success] are indeed correlated, that would suggest this is the correct organizing principle.”

Brenner said he hopes to connect England’s theory to his own microsphere constructions and determine whether the theory correctly predicts which self-replication and self-assembly processes can occur — “a fundamental question in science,” he said.

Having an overarching principle of life and evolution would give researchers a broader perspective on the emergence of structure and function in living things, many of the researchers said. “Natural selection doesn’t explain certain characteristics,” said Ard Louis, a biophysicist at Oxford University, in an email. These characteristics include a heritable change to gene expression called methylation, increases in complexity in the absence of natural selection, and certain molecular changes Louis has recently studied.

If England’s approach stands up to more testing, it could further liberate biologists from seeking a Darwinian explanation for every adaptation and allow them to think more generally in terms of dissipation-driven organization. They might find, for example, that “the reason that an organism shows characteristic X rather than Y may not be because X is more fit than Y, but because physical constraints make it easier for X to evolve than for Y to evolve,” Louis said.

“People often get stuck in thinking about individual problems,” Prentiss said. Whether or not England’s ideas turn out to be exactly right, she said, “thinking more broadly is where many scientific breakthroughs are made.”

Emily Singer contributed reporting. This article was reprinted on ScientificAmerican.com and BusinessInsider.com.


Correction: This article was revised on January 22, 2014, to reflect that Ilya Prigogine won the Nobel Prize in chemistry, not physics.


ORIGINAL: Quanta Magazine
By: Natalie Wolchover
January 22, 2014

Tuesday, June 23, 2015

3 New Kinds of Battery That Just Might Change the World

We used to think that technology was about devices. We were wrong. Those feeble plastic and glass exoskeletons are nowhere near as important as the batteries that power them. Which is why the race to a better battery is fueled by insane hype—threaded with genuine innovation.

The market for a better battery is potentially enormous. Yet as our gadgets and cars have evolved, the batteries powering them have remained pretty much unchanged. And while the press is full of reports of eureka-moment “breakthroughs,” it’s turned out to be remarkably difficult to commercialize any of this new technology on a broader scale, as journalists like Kevin Bullis and Steve LeVine have chronicled (more on that later). Making battery magic in a lab is one thing. Figuring out how to reproduce that magic safely, in a factory, millions of times over, at a price that’s competitive? That’s another.

Yet the race continues: Electric car makers are looking for cheaper, lighter, more powerful and durable cells. Electronics makers are looking for more reliable cells that can charge faster and last longer. For makers of medical implants and even wearable technology, it’s a battery small enough to “disappear.” Meanwhile, renewable energy companies are looking for batteries that can charge and discharge thousands and thousands of times and remain stable.

The breakthroughs that we seem to hear about on a weekly basis are real. But there’s an increasingly apparent gap between a breakthrough and its adoption. I looked into three areas of buzz-y battery research to find out how close they are to—as that tired old adage goes—truly changing the world.

The Solid State

Let’s start with an emerging technology that does away with a very dangerous problem with current lithium ion batteries: their enthusiasm for bursting into flame without warning. These are called solid state batteries—there are many types—and to understand how they avoid instantaneous conflagration, it helps to know a bit about why this phenomenon occurs in lithium ion batteries in the first place.

Most conventional lithium ion batteries are made up of two electrodes (the anode and cathode), separated by some sort of liquid electrolyte, or the medium that conducts the lithium ions moving from anode to cathode. The problem is that this electrolyte is very flammable—if it’s damaged or punctured, the battery will catch fire. Leading to things like, uh, this:



Solid state batteries do away with the liquid electrolyte altogether. Instead, they use a layer of some other material, usually a mixture of metals, to conduct ions between the electrodes and create energy.

But that’s only half the reason solid state technology is so exciting. Because there’s no liquid component in these cells—and because they require fewer extra layers of insulation and other safeguards—they tend to be smaller, lighter, and more adaptable than their fire-happy predecessors. That makes them very interesting to carmakers looking for a lighter, safer battery for their electric vehicles. The Department of Energy’s Advanced Research Projects Agency-Energy, or ARPA-E, is running multiple projects to either develop solid state lithium ion batteries, or solid state batteries that do away with lithium altogether.

Then there’s a leader in solid state, Sakti3, an 8-year-old company based in Ann Arbor headed up by CEO Ann Marie Sastry. A profile from MIT Technology Review’s Kevin Bullis gives us a glimpse into the work Sakti3 and Sastry are doing, which focuses on figuring out how to build solid state lithium ion batteries at scale:

She is also developing manufacturing techniques that lend themselves to mass production. “If your overall objective is to change the way people drive, your criteria can no longer only be the best energy density ever achieved or the greatest number of cycles,” she says. “The ultimate criterion is affordability, in a product that has the necessary performance.”

Sakti3’s work sounds exciting, but the company has been extremely secretive about its technology, so we don’t know exactly what it uses as its electrolyte—which could certainly end up affecting the cost or manufacturability of these batteries on a larger scale. We do know Sakti3 has attracted investments from major players, including GM’s venture arm, and claimed last year that it had doubled the energy density of the average lithium ion battery. Another solid state company, QuantumScape, is similarly quiet—but is rumored to be working on similar ideas with solid state tech.

So, why aren’t we riding around with solid state batteries under our hoods? It’s still fairly early days for commercializing on that scale. One of the biggest challenges with battery tech isn’t just the electrochemical secret sauce, it’s replicating that secret sauce in a factory, for a price lower than that of conventional cells, with greater regularity, at massive scale.

It’s a paradigm that the author Steve LeVine knows well. LeVine’s new book The Powerhouse, published this spring, is a deep dive into the rise—and fall—of a company attempting to commercialize just one of those Eureka-Game-changing-Aha-Moment-Battery-Innovations. He spent years following Envia, a battery startup that eventually secured a contract with GM to supply its cathodes, made from nickel, manganese, and cobalt, to power GM’s Volt. Until it all fell apart when the cathodes didn’t perform the way Envia claimed they would.

As LeVine explained to me on a recent call—and as he echoed in a story in Quartz this week—the most exciting thing in battery tech right now isn’t the battery. It’s the manufacturing process. “I’ve gotten very excited about what’s possible by figuring out how to bring down costs through manufacturing breakthroughs,” he said, pointing out that the Department of Energy is now focusing on staging competitions that ask entrants to focus on innovating the manufacturing process rather than the electrochemical science of the batteries themselves. “I think that’s the place to watch,” he added.

The Tesla Gigafactory under construction in March, via the Tesla Forum.
Even Elon Musk is trying to solve this particular problem. His Gigafactory, which is currently underway in Nevada, is a massive bet on the idea that Tesla can beat out its competitors simply by putting the entire battery manufacturing process under one roof. Keep in mind, this is for batteries that aren’t particularly groundbreaking. But this game is about economies of scale—and even Musk is enduring criticism that his battery factory might be obsolete before it opens as other breakthroughs in battery tech emerge. That’s a big and polemical hypothetical, but it helps illustrate how mercurial the battery industry is right now.

The Aluminum Air

Even though lithium is the king of battery materials, it has plenty of other drawbacks besides bursting into flames. Not only is it expensive to mine, but it’s less efficient than some other materials at releasing electrons, as Chemistry World recently explained, which makes it slower to charge and discharge.

So, what about batteries that don’t need any lithium at all, some of which could charge your phone in seconds—at least theoretically? An Israeli company named Phinergy has talked up one exciting but fraught contender over the past few years: An aluminum air battery. In these batteries, one electrode is an aluminum plate. The other is oxygen. More specifically, oxygen and a water electrolyte. When the oxygen interacts with the plate, it produces energy.

Aluminum air batteries have been around for a long time, though interest in them has intensified over the last few years. A much-cited 2002 study from the Journal of Power Sources brought it into the spotlight, when a group of researchers argued that aluminum-air batteries are the only feasible replacement for gasoline. In theory, these batteries could have 40 times the capacity of lithium ion batteries, and Phinergy says they could extend the range of EVs to 1,000 miles.
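
To put that "40 times" claim in perspective, assume a typical lithium-ion cell stores roughly 250 Wh/kg; that baseline is an assumption of mine, not a figure from the article:

```python
# Back-of-envelope take on the "40 times the capacity" claim above.
# The lithium-ion baseline is an assumed round number, not from the article.
li_ion_wh_per_kg = 250
al_air_wh_per_kg = 40 * li_ion_wh_per_kg
print(f"~{al_air_wh_per_kg / 1000:.0f} kWh/kg theoretical")  # ~10 kWh/kg
```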


So, it’s time to ask again: Why aren’t we all driving around in oxygen-powered cars? Well, the chemical reaction that produces energy in these batteries also happens to come with a considerable drawback. As it interacts with the oxygen, the aluminum degrades over time. It’s a type of battery called a “primary” cell, which means current only flows one way, from the anode to the cathode. That means they can’t be recharged. Instead, the batteries have to be swapped out and recycled after running down.

That’s a big infrastructure problem when it comes to widespread use. “For EVs that might be an okay situation once the infrastructure is in place for service stations to swap out new and used batteries from vehicles,” explained University of Michigan Battery Lab’s Greg Less via email. “But until that occurs, a secondary [rechargeable] cell, like Lithium-Ion will be preferable.” Aluminum air batteries certainly wouldn’t be feasible for gadgets, because they would need to have their batteries swapped out regularly.

Still, research is continuing on aluminum air, and there are several companies claiming they’ll bring it to market within the next few years, including Phinergy. A company called Fuji Pigment also claimed recently that it had made a huge leap forward. Fuji says that it’s figured out a way to protect the aluminum with insulating materials, so it would be able to recharge without being swapped.


Even if the aluminum air contenders fail, researchers are increasingly pointing towards aluminum as the battery material of the future. It’s a hot field right now: Just while I was writing this article, another piece of battery news was announced—this one from a lab at Stanford that uses aluminum and graphite as electrodes, connected by a safe liquid electrolyte. The group at Stanford says their battery can charge a smartphone in under a minute and can be “drilled through” and still remain functional. Of course, more research remains to be done.

The Microbattery

Another major issue with conventional batteries is their size. While almost every other part of our electronics gets smaller, batteries are still pretty hefty. For example, the newest Apple laptop is defined by its battery size—which, even though it’s designed in a super-efficient tiered structure, still takes up most of the space in the body.

This is a problem that goes way beyond laptops, though. Think of medical implants, which need a power supply small enough to sit inside the human body. Or ambitious long-term airborne craft projects like Solar Impulse, which need feather-light batteries to store energy. Finally, what about Project Jacquard, which seeks to wire computers into our very clothing—hopefully without a pound of lithium tucked into a pocket.

More and more research is focusing on what are called “3D” microbatteries. What’s the difference between 2D and 3D? Well, think of a 2D version as a simple sheet cake: There are two electrodes, separated by an electrolyte. These can get super-thin, but you’re limited to a very thin cake with a pretty low power output.

In comparison, a 3D battery is more like a roll cake (ok, it’s an imperfect metaphor) where you can increase the surface area of the electrodes by tightly interlocking them in microscopic layers. By increasing the surface area, you make it easier for ions to travel from one electrode to the other—which increases the battery’s power density, or the rate at which it charges and discharges.
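
The geometric payoff of interdigitating electrodes can be estimated with back-of-envelope numbers; every dimension below is invented for illustration, not taken from the research:

```python
# Rough surface-area gain from folding a planar electrode into vertical
# fins within the same footprint. All dimensions are invented.
fin_height_um = 50.0   # height of each microscopic fin
fin_pitch_um = 10.0    # spacing between adjacent fins

# Each fin contributes two sidewalls, so the effective area grows by
# roughly a factor of 1 + 2 * height / pitch over the flat footprint.
gain = 1 + 2 * fin_height_um / fin_pitch_um
print(f"effective electrode area: about {gain:.0f}x the planar footprint")
```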


Scientists are exploring many ways to manufacture these tiny wonders. In 2013, a team from Harvard used a 3D printer to get the extreme precision needed to intertwine nano-sized anodes and cathodes using a lithium “ink.”

But more recently, a team from the University of Illinois published a paper showing how they used a technique called holographic lithography to make a 3D battery. In it, super-precise optical beams are used to create a 3D structure—in this case, the electrodes—out of a photoresist (think of it as a three-dimensional unexposed negative), which in turn becomes the battery itself. Why is this better than 3D printing? Well, for one thing, holographic lithography isn’t as nascent as 3D printing, so it may have more promise when it comes to scaling up.


However, like all batteries, there’s a tradeoff here between power density, the rate that a battery produces energy, and energy density, the overall capacity of a battery—as GizMag’s Brian Dodson explained in a post about the research. It’s tough to be good at both of those things, but that’s exactly what the Illinois team is trying to do. If they succeed at commercializing their tech, it could be big. Again, that’s a mighty “if.”

Indeed, one of the paper’s authors, UI professor William King, told Gizmodo via email that the big hurdle now is figuring out how to turn this into a commercial technology. “Since our first article was published on this technology, we’ve managed to increase the battery energy density by about a factor of 3, by using new, higher energy materials,” he said. Still, “the key challenge is manufacturing scale-up, which we have been working on diligently.”

What’s Going on Inside?

One of the problems with replicating a breakthrough in a lab is that often, we don’t really know what’s happening inside the battery itself. This sounds simple, but it’s a massive challenge and arguably the biggest thing holding up battery innovation: We can’t actually observe what’s going on at a molecular level. It’s why so many battery breakthroughs seem to be accidental or unexplainable—and why they fall flat when their inventors can’t reproduce the same effects in a controlled way.

So I talked with one researcher who isn’t focusing on building batteries—he’s focusing on seeing inside of them. Michael Toney, of the SLAC National Accelerator Laboratory, is leading the way towards actually observing what’s happening inside a battery without cracking it open or disturbing the process.

Toney and his colleagues are using spectroscopic imaging and nanoscale x-rays to understand exactly what’s happening inside, say, a lithium ion battery when it’s charging. As Toney told me, the ultimate goal is to be able to view what’s happening on an atomic level. For now though, his team can view the chemical processes to determine how, for example, an anode might be leading to voltage fade, or a gradual loss of energy over time.

Eventually, Toney says the same technology could lead to software that can realistically tell you how your battery is doing—not just guess, as your phone’s little bar system does now. But that’s small potatoes compared to being able to see how batteries actually work. Because the strangest thing about the race to build a battery that can replace fossil fuels isn’t just that there are so many contenders—it’s that knowing why they succeed or fail is so incredibly hard.

While we want a breakthrough battery to be as simple as a successful experiment, it increasingly seems like finding it will be a long, incremental research effort that will see many successes and failures before all is said and done. After all, this is the Infrastructure Age. Don’t expect it to end before it even begins.

Contact the author at kelsey@Gizmodo.com.


ORIGINAL: Gizmodo
Kelsey Campbell-Dollaghan
6/23/15

Here Is the World's First Engine Driven by Nothing But Evaporation

Bioengineers invent a way to harvest energy from water evaporating at room temperature. It's an engine with living parts.


Joe Turner Lin

It might not look like much, but this plastic box is a fully functioning engine—and one that does something no other engine has ever done before. Pulling energy seemingly out of thin air, it harvests power from the ambient evaporation of room-temperature water. No kidding.

A team of bioengineers led by Ozgur Sahin at Columbia University have just created the world's first evaporation-driven engine, which they report today in the journal Nature Communications. Using nothing more than a puddle of resting water, the engine, which measures less than four inches on each side, can power LED lights and even drive a miniature car. Better yet, Sahin says, the engine costs less than $5 to build.


"This is a very, very impressive breakthrough," says Peter Fratzl, a biomaterial researcher at the Max-Planck Institute of Colloids and Interfaces in Potsdam, Germany who was not involved in the research. "The engine is essentially harvesting useful amounts of energy from the infinitely small and naturally occurring gradients [in temperature] near the surface of water. These tiny temperature gradients exist everywhere, even in some of the most remote places on Earth."


An engine with living parts

To understand how the engine works, it helps to understand the unique material behind it.

The key to Sahin’s astonishing new invention is a material he calls HYDRAs (short for hygroscopy-driven artificial muscles). HYDRAs are essentially thin, muscle-like plastic bands that contract and expand with tiny changes in humidity. A pinky-finger-length HYDRA band can cycle through contraction and expansion more than a million times with only a slight, and almost negligible, degradation of the material. “And HYDRAs change shape in really quite a dramatic way: they can almost quadruple in length,” Sahin says.

Figure 2: Hygroscopy-driven artificial muscles. From “Scaling up nanoscale water-driven energy conversion into evaporation-driven engines and generators,” Xi Chen, Davis Goodnight, Zhenghan Gao, Ahmet H. Cavusoglu, Nina Sabharwal, Michael DeLay, Adam Driks & Ozgur Sahin, Nature Communications 6, Article number: 7346. doi:10.1038/ncomms8346

Xi Chen
The idea for the HYDRA material came to Sahin more than half a decade ago, when he came across an unusual find in nature. While studying the physical properties of micro-organisms with advanced imaging techniques, he discovered that the spore of the very common grass bacillus bacteria responds in a strange way to tiny amounts of moisture. Although the dormant spore has almost no metabolic activity and does no physical work, its outer shell can soak up and exude ambient levels of evaporated water—expanding and shrinking while doing so.

"The spores stay very rigid as they expand and contract in response to humidity," Sahin says. "That rigidity means their movements come with a whole lot of energy."


After many experiments, Sahin found a way he could mimic the spore’s unique response. To make HYDRAs, he actually paints the spores onto plastic strips using a laboratory glue. By painting dormant spores in alternating patches on both sides of a single strip, the pulsating spores cause the plastic to flex and release in a single direction in response to moisture—just like a spring expanding and contracting.

While a material made of living creatures may sound like it should have a short lifespan, Fratzl says that, in fact, HYDRAs are “likely to last for a very, very long time.” “In nature, it’s absolutely critical that these spores survive from decades to even hundreds of years in dormancy, all while responding to outside humidity in this dramatic way without breaking down.”
The inner workings

How do you go from spores on strips to a working engine? 
The engine is placed over a puddle of room-temperature water, creating a small enclosure. As the water on the surface naturally evaporates, the inside of the engine becomes slightly more humid. This triggers strips of HYDRAs to expand as they soak up some of the new-found humidity. Collectively, these HYDRAs pull on a cord which is attached to a small electromagnetic generator, transforming the cord's movement into energy. The HYDRAs also pull open a set of four shutters on top of the engine, releasing the humid air. With the shutters open, humidity inside the engine drops. This causes the HYDRAs to shed their water-vapor and contract, which pulls the shutters back closed. And the process repeats, just like an engine's cycle.
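
The cycle reads like a two-state oscillator, and a toy model makes the feedback explicit. The humidity thresholds and rates below are invented for illustration, not measurements from the paper:

```python
# Toy two-state model of the engine cycle described above. Humidity values
# and rates are invented; each open-close cycle is one pull on the
# generator cord.
humidity, shutters_open = 40, False   # humidity as a percentage
for step in range(12):
    if shutters_open:
        humidity -= 10            # humid air escapes; HYDRAs shed water vapor
        if humidity <= 40:
            shutters_open = False # contracting HYDRAs pull the shutters closed
    else:
        humidity += 5             # evaporation humidifies the enclosure
        if humidity >= 60:
            shutters_open = True  # expanding HYDRAs pull the cord and shutters
    state = "open" if shutters_open else "closed"
    print(f"t={step:2d}  humidity={humidity}%  shutters={state}")
```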

Sahin has found that the engine works at room temperature (around 70 degrees Fahrenheit) with water that's at a wide range of temperatures—from 60 to 90 degrees F. Because water naturally evaporates faster at higher temperatures, hotter water works best.
  • With 60-degree (15.5°C) water, the engine will open and close its shutters once every 40 seconds. 
  • At 70 degrees (21.1°C), it does so every 20 seconds
  • At 90 degrees (32.2°C), it's every 10.

Sahin also created a second engine with his HYDRAs—this one a turbine-style creation that uses the motion of bending HYDRAs to spin a wheel. Placed on top of a miniature car, the entire device slowly ekes forward—again, powered by nothing but evaporating water.




More than a toy
On average, each pull of the engine creates roughly 50 microwatts. That’s a tiny amount of power, but it’s enough to generate light with an LED by harvesting the energy of a puddle of water that’s doing nothing but existing at room temperature. Sahin also says that the materials used to make the engine are extremely cheap. Even including the HYDRAs, he says it should cost less than $5 to put together.

There is plenty of room for improvement, too. For one thing, he says, each HYDRA band uses just 1 percent of the energy potential of the bacterial spores. A HYDRA-like material that could make better use of the spores would radically increase the usefulness of the device. In fact, Sahin says he has already developed another material that could tap into one-third of the spores’ energy potential, but it proved an absolute nightmare to finagle that material into a long-lasting engine.

For now, the evaporation engine is just a proof of concept meant to show that this unique type of energy generation really can be accomplished. Whether future devices will ever be able to compete with other renewable energy sources, such as wind or solar energy collection, may be a question that won't even be answerable for decades. But the promise is there, he says. Just consider the way the planet works: "The power in wind on a global scale primarily comes from evaporation," he says, "so there's more power to be had here than there is in the wind."


ORIGINAL: Popular Mechanics
By William Herkewitz

The Best Design of the Year (Maybe Ever?)

Every year, the Design Museum in London picks a single object and names it the best design of the year. It’s pretty bad sometimes! But this year, the museum picked a winner: A chip that replaces animal test subjects with a complex package of human cells.




It’s called a lung-on-a-chip, a name that is very literally true, lest you think this is simply a computer chip programmed to mimic a lung. It comes from Harvard’s Wyss Institute for Biologically Inspired Engineering, which explains in a great video how it works.

This clear, simple-looking brick of plastic actually contains complex human cells, arranged in a simplified version of the way a lung works: along the central channels, a lining of human lung cells is separated from a lining of capillary blood-vessel cells by a porous membrane, just like the air sacs in your lung.

On each side, channels create the flexing movement that an air sac makes while you breathe.

In other words, it’s all of the biological complexity of your lungs distilled onto a chip-sized piece of plastic.

Scientists can, for example, introduce bacteria to the channels to mimic an infection, and white blood cells in the capillary channel will attack. Or they can introduce the chemicals you breathe in regularly, mimicking air pollution and its effect on your lungs. Or test new medications.

“Bio-inspired micro-devices that mimic whole human organs, such as the lung on a chip, could potentially replace animal testing and bring new therapies to patients faster and at lower cost in the future,” the design team explains in their video. Other labs are working on organs like the heart and even the spleen, and Wyss’ ultimate goal is to build ten different organs and link them to create a whole body.

Who do we have to thank for bringing news of the chip to the design world? That would be Paola Antonelli, MoMA’s Senior Curator of Architecture & Design, as Dezeen, the award’s media partner, points out today in its announcement. Antonelli not only nominated the chip; she had already added it to MoMA’s permanent collection in March, writing on MoMA’s blog:

Esoteric or specialized, perhaps, but universally remarkable in their balance of form, function, and vision, investigations like the Wyss Institute’s Human Organs-on-Chips demonstrate new, radical intersections of synthetic biology and design.

In the past, the Design Museum’s picks have ranged from anodyne at best (a lightbulb, in 2011) to downright tone-deaf, like the jury’s choice of a Zaha Hadid building in Azerbaijan built by a dictatorial regime and named for a president known for his human rights abuses. This year, the jury really turned it around, selecting an object that is not only a brilliant piece of design but also has the power to end the barbaric practice of animal testing while helping human patients.

Antonelli deserves a lot of credit for caring about what’s happening in science, medicine, and technology, and for forcing the rest of the design world to broaden its insular, myopic field of view to include objects that aren’t just lightbulbs and billion-dollar museums, great though those are. Design won’t save the world, but it can certainly change it for good.


ORIGINAL: Gizmodo

Monday, June 22, 2015

The story of the invention that could revolutionize batteries—and maybe American manufacturing as well

This black goop is what will be at the heart of the next generation of batteries. (Kieran Kesner for Quartz)
The world has been clamoring for a super-battery.
Since about 2010, a critical mass of national leaders, policy professionals, scientists, entrepreneurs, thinkers and writers have all but demanded a transformation of the humble lithium-ion cell. Only batteries that can store a lot more energy for a lower price, they have said, will allow for affordable electric cars, cheaper and more widely available electricity, and a reduction in greenhouse gas emissions. In the process, a lot of gazillionaires will be created.

But they have been vexed. Not only has nobody created a super-battery; a large number of researchers have lost faith in their powers to do so—perhaps ever. Entrepreneurs such as Tesla’s Elon Musk continue to tinker with off-the-shelf batteries for luxury electric cars and home power-storage systems, but industry hands seem generally to doubt that their cost will drop enough to attract a mass market any time soon. Increasingly, they are concluding that the primacy of fossil fuels will continue for decades to come, and probably into the next century.

This is where Yet-Ming Chiang enters the picture. A wiry, Taiwanese-American materials-science professor at the Massachusetts Institute of Technology (MIT), Chiang is best known for founding A123, a lithium-ion battery company that had the biggest IPO of 2009. The company ended up filing for bankruptcy in 2012 and selling itself in pieces at fire-sale prices to Japanese and Chinese rivals. Yet Chiang himself emerged untainted.

In 2010, having rounded up $12.5 million from Boston venture capital firms and federal funds, Chiang launched another company. Again, it was in batteries. And today, after five years in “stealth mode,” he is going public. There may be a way to revolutionize batteries, he says, but right now it is not in the laboratory. Instead, it’s on the factory floor. Ingenious manufacturing, rather than an ingenious leap in battery chemistry, might usher in the new electric age.

When it starts commercial sales in about two years, Chiang says, his company will slash the cost of an entry-level battery plant by a factor of 10, and cut around 30% off the price of the batteries themselves. That’s thanks to a new manufacturing process, along with a powerful new cell that adds energy while stripping away cost. Together, he says, they will allow lithium-ion batteries to begin to compete with fossil fuels.
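
To make those two claims concrete, here is the arithmetic on hypothetical baselines. Neither figure below comes from Chiang or the article; the $100 million plant cost and $300-per-kilowatt-hour cell price are stand-in 2015-era numbers used only for illustration.

```python
# Illustrating Chiang's two stated claims against hypothetical baselines.
# Baseline numbers are invented stand-ins, not figures from the article.

baseline_plant_cost_usd = 100e6   # hypothetical entry-level battery plant
baseline_cell_price_kwh = 300.0   # hypothetical $/kWh cell price circa 2015

plant_cost = baseline_plant_cost_usd / 10      # plant cost cut by a factor of 10
cell_price = baseline_cell_price_kwh * 0.70    # "around 30%" off the cells

print(f"plant: ${plant_cost / 1e6:.0f}M, cells: ${cell_price:.0f}/kWh")
# -> plant: $10M, cells: $210/kWh
```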

But Chiang’s concept is also about something more than just cheaper, greener power. It’s a model for a new kind of innovation, one that focuses not on new scientific invention, but on new ways of manufacturing. For countries like the US that have lost industries to Asia, this opens the possibility of reinventing the techniques of manufacture. Those that take this path could own that intellectual property—and thus the next manufacturing future.

This is the story of how that came about.
24M batteries. (Kieran Kesner for Quartz)

Manufacturing, the new frontier of innovation
Traditionally, big innovations have happened at the lab bench. A discovery is made and patented, then handed off to a commercial player who scales it up. With luck, it turns into a blockbuster product.

But, according to a report published in February by the Brookings Institution, researchers are increasingly skeptical of the delineation between innovation and production. Breakthrough-scale invention, they say, happens not only in the lab, but also in factories.

This is not a new idea. Until 1856, for instance, steel was an ultra-expensive niche product. It was far more robust than iron, but no one knew how to make it economically, so its use was confined to specialty hand tools and eating utensils for the rich. But then British inventor Henry Bessemer, stirred by French gripes about the fragility of cast-iron cannons, devised a process that reduced the cost of steel by more than 80%, making it roughly as cheap as iron. Steel, along with oil, went on to propel the latter part of the Industrial Revolution and the gargantuan 20th-century economic boom.

If Bessemer had made his breakthrough today, it would be called “advanced manufacturing,” a label that has been broadly applied to a range of next-generation fabrication methods.
There is some hype around the term: the Brookings report identifies 50 industries in the US alone as “advanced,” and historic factory hubs such as the English city of Sheffield are renaming themselves as variants of “advanced manufacturing cluster.”
Nonetheless, entrepreneurs who develop genuinely novel manufacturing processes can enjoy the advantage of a patent and a head start on the crowd. While others will inevitably copy them, theirs will be a race to catch up. To the degree that such authentic advanced manufacturing moves forward, offering the US a chance to reinstate its prowess as a manufacturing hub, it is being led in part by a few clean-energy companies like Yet-Ming Chiang’s.
Yet-Ming Chiang, 24M’s founder. (Kieran Kesner for Quartz)

The birth of an idea

At 57, Chiang has short-cropped, gray-flecked black hair, and almost always wears blue, long-sleeved check shirts. He speaks in a soft, even cadence, and is prone to finishing his sentences with a disarming, open-jawed grin.

But if unassuming, Chiang is also tremendously driven. His science-centered business sense has earned tens of millions of dollars for his investors. He and his family live on a farm on the affluent outskirts of Boston, where he raises bees and chickens, and hunts and fishes nearby.