viernes, 23 de octubre de 2015

Robot With Tummy Full of Microbes Can Swim in Dirty Water Forever


Image: University of Bristol. Row-bot with mouth open (inset shows mouth closed).
Robots are better than animals in almost every way. Well, they’re better in some ways, I guess. I mean, robots are occasionally okay at some things. A few things. None of those things are energetic autonomy: the ability to operate continuously and indefinitely without dependence on humans for refueling. There certainly are robots that operate autonomously for long durations, and they’re either feeding off of radioactivity, or they’re relying on solar panels that don’t work half the time. A better option (at least in some situations) might be robots that forage for food like animals do, taking care of their own energy needs all by themselves.

This is only a slightly crazy idea (although at one point it was briefly the craziest idea ever), and fuel cells that are full of living microbes are a real thing. At the Bristol Robotics Laboratory, in the United Kingdom, they’ve been developing a robot called Row-bot that can swim around, harvesting energy directly from the water using a microbial fuel cell as an artificial stomach.

According to the researchers, Microbial Fuel Cells (MFCs) generate electricity by “electrons mobilised by the redox reaction that takes place in electrogenic bacterial anabolism.” More specifically, in the case of their device, they explain that “raw organic biomass is used as both an inoculant for the bacterial culture and the anolyte that fuels the reaction resulting in an environmentally biocompatible means of electricity generation.” In other words (simpler ones), microbes eat stuff in the water and poop out electrons, and as long as you’ve got enough water with stuff in it to keep the microbes fat and happy, they’ll keep giving you electrons that you can use to make your robot do things. MFCs work in all kinds of water, including fresh water in rivers and lakes, seawater, and even waste water, and they actually clean the water as they go, which is nice.

Microbes are kind of tiny, and each one doesn’t produce a lot of energy, so to do anything useful, you either need a whole bunch of them (multiple fuel cells) or a very efficient robot. Row-bot is very efficient, modeled on a water beetle. It has two side paddles to move, little floaty feeties to keep it from drowning, and a microbial fuel cell in its tummy:


It has a mouth, too, that it can open to ingest water for the fuel cell, and also a fuel outflow port on its posterior end that we’d call its butt if we were immature, which we’re not. Each time the robot opens its mouth, it swims forward, ingests fresh water into its MFC tummy, digests for 3 minutes, and then expels the water out the back as it swims forward again, making room for a fresh gulp. Row-bot stores the energy generated by the MFC in a capacitor, and over one cycle (opening its mouth, swimming 10 strokes forward at just under 1 stroke per second and then closing its mouth), it only uses 1.8 joules. That’s 20 centimeters of motion with about 1 joule of energy left over that could potentially be used to power sensors or laser turrets or something.
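
To put those numbers in perspective, here is a quick back-of-the-envelope sketch in Python. The per-cycle figures come straight from the article; everything derived from them is just arithmetic.

```python
# Back-of-the-envelope energy budget for one Row-bot feeding/swimming cycle,
# using the approximate figures quoted in the article.

strokes_per_cycle = 10          # paddle strokes per cycle
stroke_rate_hz = 1.0            # just under 1 stroke per second
distance_per_cycle_m = 0.20     # ~20 cm travelled per cycle
energy_per_cycle_j = 1.8        # energy spent opening mouth, swimming, closing mouth
surplus_per_cycle_j = 1.0       # roughly 1 J of MFC output left over per cycle

cycle_time_s = strokes_per_cycle / stroke_rate_hz      # ~10 s of swimming, plus ~3 min digesting
cost_of_transport_j_per_m = energy_per_cycle_j / distance_per_cycle_m

print(f"Swimming time per cycle: ~{cycle_time_s:.0f} s (plus ~180 s of digestion)")
print(f"Energy cost of locomotion: ~{cost_of_transport_j_per_m:.0f} J/m")
print(f"Surplus available for payloads: ~{surplus_per_cycle_j:.1f} J per cycle")
```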

As long as Row-bot has water to swim through, the MFC makes it more or less energetically autonomous, although the current design is mostly a testbed for integration of the MFC with actuators to see how well it works. There’s a lot more optimization that needs to happen, like reducing body drag and finding the most efficient combination of materials to use for the paddles, as well as altering the “stride” of the robot to better mimic actual water beetles. Also, multiple MFCs could be configured in series if you need more power for those aforementioned laser turrets.

Eventually, the researchers suggest that Row-bot could be developed for applications such as remote sensing and environmental monitoring and clean-up, although space exploration isn’t out of the question either.

Also, it’s called “Row-bot.” Get it? Row-bot the robot. Heh.

“Row-bot: An Energetically Autonomous Artificial Water Boatman,” by Hemma Philamore, Jonathan Rossiter, Andrew Stinchcombe, and Ioannis Ieropoulos from the University of Bristol and University of the West of England, was presented at IROS 2015 in Hamburg, Germany.


ORIGINAL: IEEE Spectrum
By Evan Ackerman
Posted 22 Oct 2015

These Vertical Farms Turn Unused City Wall Space Into Gardens That Grow Your Lunch

Living walls have been around for a while, but until now they haven't been used to grow food.
In most cities, where any available land tends to be quickly snatched up by developers, it can be hard for would-be urban farmers without backyards to find a place to plant crops. But if there aren't enough community garden plots to go around, one urban farming company thinks cities have another resource: walls.

Bright Agrotech, a Wyoming-based vertical farming company, designs lightweight hydroponic farm systems that can attach to any unused wall space along sidewalks or behind buildings.


"Vertical surfaces are really one of the most undervalued types of real estate in the world," says Nate Storey, CEO of Bright Agrotech. "Basically all you can use them for is advertising."

While living walls have been around for a while, they aren't usually used for growing food, and they're usually expensive. Bright Agrotech's system is still a little pricey—a small system starts at $569—but they're working to make it accessible to anyone who wants to grow food and bring the cost down further.

"What we're really focused on is decentralizing and democratizing agriculture," Storey says. "Getting produce from the grower to the consumer as quickly as possible, as efficiently and as locally as possible."

The company also designed the system to be simple enough for non-gardeners to master.

"Traditional living walls or green walls are typically pretty bulky, hard to set up, and hard to maintain," he says. "Our goal is to put together something that's really simple and really easy to use." Because the wall is self-watering and the design eliminates weeds, the company calculates that it takes less time to maintain than an ordinary garden. Every five or six weeks, you can harvest around 40 pounds of greens.

The same system can also be used indoors, but the company sees some advantages to plastering plants on the sides of buildings. "I think that agriculture is not just functional but beautiful," Storey says. "I really feel strongly that we weren't designed to live in the concrete jungle. Living in these bleak cityscapes takes a toll on our psyche...being able to take agriculture to the sides of buildings, and change the landscape, is going to be a powerful thing not just for food production, but for people."

Ultimately, he thinks that future cities could double as massive gardens. "I see a future where cities are green, and not grey," he says. "We're growing on the outsides of buildings, we're growing on the insides of buildings. We're growing anywhere we can grow."

ORIGINAL: Fast Co Exist
October 23, 2015


Adele Peters is a staff writer at Co.Exist who focuses on sustainable design. Previously, she worked with GOOD, BioLite, and the Sustainable Products and Solutions program at UC Berkeley. You can reach her at apeters at fastcompany dot com.


jueves, 22 de octubre de 2015

Feature: The bizarre reactor that might save nuclear fusion


If you’ve heard of fusion energy, you’ve probably heard of tokamaks. These doughnut-shaped devices are meant to cage ionized gases called plasmas in magnetic fields while heating them to the outlandish temperatures needed for hydrogen nuclei to fuse. Tokamaks are the workhorses of fusion—solid, symmetrical, and relatively straightforward to engineer—but progress with them has been plodding.

Now, tokamaks’ rebellious cousin is stepping out of the shadows. In a gleaming research lab in Germany’s northeastern corner, researchers are preparing to switch on a fusion device called a stellarator, the largest ever built. The €1 billion machine, known as Wendelstein 7-X (W7-X), appears now as a 16-meter-wide ring of gleaming metal bristling with devices of all shapes and sizes, innumerable cables trailing off to unknown destinations, and technicians tinkering with it here and there. It looks a bit like Han Solo’s Millennium Falcon, towed in for repairs after a run-in with the Imperial fleet. Inside are 50 6-tonne magnet coils, strangely twisted as if trampled by an angry giant.

Although stellarators are similar in principle to tokamaks, they have long been dark horses in fusion energy research because tokamaks are better at keeping gas trapped and holding on to the heat needed to keep reactions ticking along. But the Dali-esque devices have many attributes that could make them much better prospects for a commercial fusion power plant: Once started, stellarators naturally purr along in a steady state, and they don’t spawn the potentially metal-bending magnetic disruptions that plague tokamaks. Unfortunately, they are devilishly hard to build, making them perhaps even more prone to cost overruns and delays than other fusion projects. “No one imagined what it means” to build one, says Thomas Klinger, leader of the German effort.



W7-X could mark a turning point. The machine, housed at a branch of the Max Planck Institute for Plasma Physics (IPP) that Klinger directs, is awaiting regulatory approval for a startup in November. It is the first large-scale example of a new breed of supercomputer-designed stellarators that have had most of their containment problems computed out. If W7-X matches or beats the performance of a similarly sized tokamak, fusion researchers may have to reassess the future course of their field. “Tokamak people are waiting to see what happens. There’s an excitement around the world about W7-X,” says engineer David Anderson of the University of Wisconsin (UW), Madison.

Adapted from IPP by C. Bickel and A. Cuadra/Science
Wendelstein 7-X, the first large-scale optimized stellarator, took 1.1 million working hours to assemble, using one of the most complex engineering models ever devised, and must withstand huge temperature ranges and enormous forces.

Stellarators face the same challenge as all fusion devices: They must heat and hold on to a gas at more than 100 million degrees Celsius—seven times the temperature of the sun’s core. Such heat strips electrons from atoms, leaving a plasma of electrons and ions, and it makes the ions travel fast enough to overcome their mutual repulsion and fuse. But it also makes the gas impossible to contain in a normal vessel.

Instead, it is held in a magnetic cage. A current-carrying wire wound around a tube creates a straight magnetic field down the center of the tube that draws the plasma away from the walls. To keep particles from escaping at the ends, many early fusion researchers bent the tube into a doughnut-shaped ring, or torus, creating an endless track.

But the torus shape creates another problem: Because the windings of the wire are closer together inside the hole of the doughnut, the magnetic field is stronger there and weaker toward the doughnut’s outer rim. The imbalance causes particles to drift off course and hit the wall. The solution is to add a twist that forces particles through regions of high and low magnetic fields, so the effects of the two cancel each other out.
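
That imbalance is easy to quantify: for an ideal toroidal coil, Ampère's law gives a field that falls off as one over the major radius, B = μ0·N·I / (2πR). Here is a minimal Python sketch; the coil count and current are illustrative placeholders, not the specifications of W7-X or any real machine.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def toroidal_field(major_radius_m, n_turns=50, current_a=1.5e6):
    """Ideal toroidal-coil field, B = mu0 * N * I / (2 * pi * R).

    n_turns and current_a are illustrative placeholders, not real machine specs.
    """
    return MU0 * n_turns * current_a / (2 * math.pi * major_radius_m)

# The field is stronger near the hole of the doughnut than at the outer rim,
# which is the imbalance that makes particles drift toward the wall.
for R in (4.5, 5.5, 6.5):  # major radius in metres, inboard -> outboard
    print(f"R = {R:.1f} m  ->  B = {toroidal_field(R):.2f} T")
```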

Stellarators impose the twist from outside. The first stellarator, invented by astro-physicist Lyman Spitzer at Princeton University in 1951, did it by bending the tube into a figure-eight shape. But the lab he set up—the Princeton Plasma Physics Laboratory (PPPL) in New Jersey—switched to a simpler method for later stellarators: winding more coils of wire around a conventional torus tube like stripes on a candy cane to create a twisting magnetic field inside.

In a tokamak, a design invented in the Soviet Union in the 1950s, the twist comes from within. Tokamaks use a setup like an electrical transformer to induce the electrons and ions to flow around the tube as an electric current. This current produces a vertical looping magnetic field that, when added to the field already running the length of the tube, creates the required spiraling field lines.

Both methods work, but the tokamak is better at holding on to a plasma. In part that’s because a tokamak’s symmetry gives particles smoother paths to follow. In stellarators, Anderson says, “particles see lots of ripples and wiggles” that cause many of them to be lost. As a result, most fusion research since the 1970s has focused on tokamaks—culminating in the huge ITER reactor project in France, a €16 billion international effort to build a tokamak that produces more energy than it consumes, paving the way for commercial power reactors.

But tokamaks have serious drawbacks. A transformer can drive a current in the plasma only in short pulses that would not suit a commercial fusion reactor. Current in the plasma can also falter unexpectedly, resulting in “disruptions”: sudden losses of plasma confinement that can unleash magnetic forces powerful enough to damage the reactor. Such problems plague even up-and-coming designs such as the spherical tokamak (Science, 22 May, p. 854).

Stellarators, however, are immune. Their fields come entirely from external coils, which don’t need to be pulsed, and there is no plasma current to suffer disruptions. Those two factors have kept some teams pursuing the concept.

The largest working stellarator is the Large Helical Device (LHD) in Toki, Japan, which began operating in 1998. Lyman Spitzer would recognize the design, a variation on the classic stellarator with two helical coils to twist the plasma and other coils to add further control. The LHD holds all major records for stellarator performance, shows good steady-state operation, and is approaching the performance of a similarly sized tokamak.

Two researchers—IPP’s Jürgen Nührenberg and Allen Boozer of PPPL (now at Columbia University)—calculated that they could do better with a different design that would confine plasma with a magnetic field of constant strength but changing direction. Such a “quasi-symmetric” field wouldn’t be a perfect particle trap, says IPP theorist Per Helander, “but you can get arbitrarily close and get losses to a satisfactory level.” In principle, it could make a stellarator perform as well as a tokamak.

The design strategy, known as optimization, involves defining the shape of magnetic field that best confines the plasma, then designing a set of magnets to produce the field. That takes considerable computing power, and supercomputers weren’t up to the job until the 1980s.
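
In code, that workflow looks roughly like the loop below: parameterize the coils, score each candidate shape, and let a numerical optimizer search. This is only a toy sketch; the objective function is a made-up stand-in for the plasma-physics codes that actually evaluate confinement.

```python
# Toy illustration of the "optimized stellarator" workflow: parameterize the
# coil shapes, score each candidate with an objective, and let an optimizer
# search. The objective here is a stand-in, not real physics.
import numpy as np
from scipy.optimize import minimize

def field_ripple(coil_params: np.ndarray) -> float:
    """Pretend objective: penalize deviation from a quasi-symmetric target.

    A real optimization would run an equilibrium code and score particle
    transport; here we just penalize fictitious 'ripple' amplitudes.
    """
    target = np.zeros_like(coil_params)
    return float(np.sum((coil_params - target) ** 2) + 0.1 * np.sum(np.abs(coil_params)))

initial_guess = np.random.default_rng(0).normal(size=12)  # 12 fictitious coil parameters
result = minimize(field_ripple, initial_guess, method="Nelder-Mead")
print("optimized ripple metric:", round(result.fun, 6))
```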

The first attempt at a partially optimized stellarator, dubbed Wendelstein 7-AS, was built at the IPP branch in Garching near Munich and operated between 1988 and 2002. It broke all stellarator records for machines of its size. Researchers at UW Madison set out to build the first fully optimized device in 1993. The result, a small machine called the Helically Symmetric Experiment (HSX), began operating in 1999. “W7-AS and HSX showed the idea works,” says David Gates, head of stellarator physics at PPPL.

That success gave U.S. researchers confidence to try something bigger. PPPL began building the National Compact Stellarator Experiment (NCSX) in 2004 using an optimization strategy different from IPP’s. But the difficulty of assembling the intricately shaped parts with millimeter accuracy led to cost hikes and schedule slips. In 2008, with 80% of the major components either built or purchased, the Department of Energy pulled the plug on the project (Science, 30 May 2008, p. 1142). “We flat out underestimated the cost and the schedule,” says PPPL’s George “Hutch” Neilson, manager of NCSX.

IPP/Wolfgang Filser. Wendelstein 7-X’s bizarrely shaped components must be put together with millimeter precision. All welding was computer controlled and monitored with laser scanners.
BACK IN GERMANY, the project to build W7-X was well underway. The government of the recently reunified country had given the green light in 1993 and 1994 and decided to establish a new branch institute at Greifswald, in former East Germany, to build the machine. Fifty staff members from IPP moved from Garching to Greifswald, 800 kilometers away, and others made frequent trips between the sites, says Klinger, director of the Greifswald branch. New hires brought staff numbers up to today’s 400. W7-X was scheduled to start up in 2006 at a cost of €550 million.


But just like the ill-fated American NCSX, W7-X soon ran into problems. The machine has 425 tonnes of superconducting magnets and support structure that must be chilled close to absolute zero. Cooling the magnets with liquid helium is “hell on Earth,” Klinger says. “All cold components must work, leaks are not possible, and access is poor” because of the twisted magnets. Among the weirdly shaped magnets, engineers must squeeze more than 250 ports to supply and remove fuel, heat the plasma, and give access for diagnostic instruments. Everything needs extremely complex 3D modeling. “It can only be done on computer,” Klinger says. “You can’t adapt anything on site.”

By 2003, W7-X was in trouble. About a third of the magnets produced by industry failed in tests and had to be sent back. The forces acting on the reactor structure turned out to be greater than the team had calculated. “It would have broken apart,” Klinger says. So construction of some major components had to be halted for redesigning. One magnet supplier went bankrupt. The years 2003 to 2007 were a “crisis time,” Klinger says, and the project was “close to cancellation.” But civil servants in the research ministry fought hard for the project; finally, the minister allowed it to go ahead with a cost ceiling of €1.06 billion and first plasma scheduled for 2015.

After 1.1 million construction hours, the Greifswald institute finished the machine in May 2014 and spent the past year carrying out commissioning checks, which W7-X passed without a hitch. Tests with electron beams show that the magnetic field in the still-empty reactor is the right shape. “Everything looks, to an extremely high accuracy, exactly as it should,” IPP’s Thomas Sunn Pedersen says.

Approval to go ahead is expected from Germany’s nuclear regulators by the end of this month. The real test will come once W7-X is full of plasma and researchers finally see how it holds on to heat. The key measure is energy confinement time, the rate at which the plasma loses energy to the environment. “The world’s waiting to see if we get the confinement time and then hold it for a long pulse,” PPPL’s Gates says.
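
Confinement time has a simple definition worth spelling out: the plasma's stored thermal energy divided by the power it is losing, τ_E = W / P_loss. A tiny worked example with invented numbers:

```python
# Energy confinement time: tau_E = W / P_loss, i.e. roughly how long the plasma
# would retain its stored energy if heating were switched off. The numbers are
# invented for illustration, not W7-X predictions.

stored_energy_mj = 0.6   # thermal energy held in the plasma, MJ
loss_power_mw = 4.0      # power leaking out to the walls, MW

tau_e_s = stored_energy_mj / loss_power_mw  # MJ / MW = seconds
print(f"Energy confinement time: {tau_e_s:.2f} s")
```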

Success could mean a course change for fusion. The next step after ITER is a yet-to-be-designed prototype power plant called DEMO. Most experts have assumed it would be some sort of tokamak, but now some are starting to speculate about a stellarator. “People are already talking about it,” Gates says. “It depends how good the results are. If the results are positive, there’ll be a lot of excitement.”


ORIGINAL: Science
21 October 2015

The Way Electric Eels Kill is Even Cooler Than We Realized


Electric eels are among the most badass predators on planet Earth. How many other creatures can deliver a shock powerful enough to paralyze a horse? But their superpowers are even more impressive than we realized. These eels don’t just use electricity to attack, they use it to see.

That’s the conclusion of a fascinating study published today in Nature Communications. In a series of laboratory experiments, neurobiologist Ken Catania and colleagues show how electric eels “electrolocate” their prey after paralyzing it, using energy fields to locate and swallow hapless victims almost instantly.

“The eel can use its electric attack simultaneously as a weapon and a sensory system,” Catania told National Geographic. “It’s sort of a science-fiction-like ability.”

Electric eels, which are actually a type of knifefish rather than true eels, slink quietly about in the murky depths of the Amazon River, looking for ill-fated creatures on which to discharge their 600-volt weapon. We’ve known of the eel’s formidable hunting ability for decades, but the exact mechanics have proven difficult to study (you try capturing an 8-foot-long living taser and bringing it back to the lab—it ain’t easy).

Catania is more persistent than most. In a study published last year in Science, he showed that electric eels’ high voltage attacks can stimulate their prey’s motor neurons, causing involuntary muscle twitching. Using two or three small electric volleys, the eels will force prey to give away their location before charging up and delivering the paralyzing blow.

Electric eel homing in on an electrically conductive stimulus (red arrow), before initiating its suction-feeding strike. Image Credit: Catania et al. 2015
But how does the eel find its lunch once that prey is disabled? As Catania points out, electric eels will strike and engulf their victims lightning fast — usually within milliseconds.

Electricity figures in here, too, according to a series of laboratory experiments performed by Catania and his colleagues. National Geographic explains:

To understand what was happening, Catania brought electric eels into the lab and presented them with anesthetized fish that were insulated from the eel’s electroreceptors by plastic bags. With an electrode, Catania made the fish flinch, and the eel discharged its high-voltage attack. But then it didn’t seem to know what to do next—the eel lunged in the direction of movement in the water but didn’t attempt to suck the fish into its mouth.

Catania then put an electrically conductive carbon rod into the tank along with the fish. He made the fish flinch, and the eel attacked with a shock. Sometimes the eel started to move in the direction of the fish, but then it changed course to lunge at the rod wherever it had been placed in the tank. To the eel, the fish seemed to be in two places at once.

What these experiments are showing is that electric eels don’t just use voltage to immobilize prey: they also follow electric fields to track it. This places the eel in league with sharks, rays, and certain African fish as a predator that can electrolocate—an adaptation that’s similar to echolocation in bats and dolphins.

Me, I’m just grateful this particular hunting ability seems restricted to the water.

[Read the full scientific paper at Nature Communications h/t National Geographic]

martes, 20 de octubre de 2015

Nanocarbons Clean Water Without the Use of Chemicals and Transform Waste Heat into Electricity


Scientists at INM are working on a desalination method which does not require the addition of chemicals and which is highly energy-efficient. This environmentally friendly process can even be used to generate electricity.

When purifying waste water, chemical reactions are used or it is subjected to elaborate filtering methods to remove salts and heavy metals. Now scientists in Saarbrücken are working on a desalination method which does not require the addition of chemicals and which is highly energy-efficient: in the process known as capacitive de-ionization (CDI), electrodes are used to extract the ions from the water and collect them on the electrodes. The result is clean water and ions which have been enriched on the electrodes. This environmentally friendly process can even be used to generate electricity: emissions such as carbon dioxide are also suited to generating electrical energy when they are dissolved in water as ions.

At an international conference in Saarbrücken, more than 110 experts from 20 countries will be exchanging their knowledge from 26th to 28th October in order to further understand and improve the materials and processes which form the basis of CDI. The conference is being organized on the Saarbrücken Campus by the INM – Leibniz Institute for New Materials.

Nanoporous carbon materials for electrochemical water treatment via capacitive deionization. Copyright: Uwe Bellhäuser /Volker Presser (INM)
The principle of CDI not only serves the purpose of removing unwanted ions from the water. Volker Presser, head of the Energy Materials Group at INM, explains, “The generation of electricity can also be the main desired effect, to use emissions from power plants to produce electricity, for example.” For this purpose, the emissions merely have to be present in the water as ions. “Carbon dioxide, for example, is very well suited to this purpose,” Professor Presser said, adding: “It is particularly exciting for us that, thanks to the electrosorption process, we can even convert waste heat into electricity.” He said this worked because the electrodes are charged at low temperatures and discharged at higher temperatures. As a result of the temperature increase, the electrical voltage increases so that electrical energy can be recovered directly during discharging.
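
The waste-heat trick can be made concrete with a toy cycle calculation: if a fixed charge is pushed onto the electrodes at a low voltage and released again at a slightly higher voltage after warming, the difference comes back out as electrical work, paid for by the absorbed heat. All numbers below are invented purely for illustration.

```python
# Toy model of thermally assisted charging/discharging in a CDI-style cell:
# charge at low temperature (low cell voltage), warm the cell so the voltage
# rises at fixed charge, then discharge. For such an idealized rectangular
# cycle the net electrical work is roughly Q * (V_hot - V_cold).
# All numbers are invented for illustration.

charge_c = 50.0   # charge moved onto the electrodes, coulombs
v_cold = 0.50     # cell voltage while charging at low temperature, volts
v_hot = 0.55      # cell voltage after warming with waste heat, volts

work_in_j = charge_c * v_cold    # electrical work spent charging
work_out_j = charge_c * v_hot    # electrical work recovered on discharge
net_gain_j = work_out_j - work_in_j

print(f"Invested: {work_in_j:.1f} J, recovered: {work_out_j:.1f} J, net gain: {net_gain_j:.1f} J")
```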

The basic principle of CDI functions without chemical reactions thanks to ion-electrosorption: the water to be purified flows between two electrodes made of porous carbon to which a voltage is applied. The positively charged electrode extracts the ions which have a negative charge from the water and the negatively charged electrode, located opposite, extracts the ions with a positive charge from the water. The ions are stored in the nanopores of the electrode material and, at the end of the process, clean water flows out.
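
For a rough sense of scale, the salt a CDI cell removes is tied to the electrical charge passed through it: at most one monovalent ion pair per electron, scaled down by a charge efficiency below one. A hedged sketch with illustrative numbers:

```python
# Rough scale of CDI desalination: ions removed are proportional to the charge
# passed, reduced by a charge efficiency < 1. Values are illustrative only.

FARADAY = 96485.0        # coulombs per mole of elementary charge

charge_passed_c = 120.0  # total charge moved during one adsorption step, C
charge_efficiency = 0.8  # fraction of charge that actually removes an ion pair
molar_mass_nacl = 58.44  # g/mol

mol_salt_removed = charge_efficiency * charge_passed_c / FARADAY
grams_removed = mol_salt_removed * molar_mass_nacl
print(f"~{grams_removed * 1000:.0f} mg of NaCl removed per adsorption step")
```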

“To achieve the highest possible degree of efficiency, it is not sufficient to simply use ‘any’ porous activated carbon as electrode material,” comments Presser, the young Saarbrücken-based researcher. One focus of his work and an important topic of discussion at the conference is therefore the synthesis and characterisation of new carbon nanomaterials such as graphenes or so-called carbon nano-onions.

The CDI&E International Conference on Capacitive Deionization and Electrosorption will take place from 26th to 28th October in the auditorium of the campus of the Saarland University. The conference is chaired by the German-Israeli team consisting of Professor Volker Presser and Professor Matthew Suss from Technion – the Israel Institute of Technology.

A large number of participants come from Asian and Arab regions since countries where there is a marked scarcity of water are doing intensive research into the technology. CDI is also an attractive option for processing water from mine workings: not only do you get clean water; the ions or precious metals retained as a highly enriched fluid can be a valuable resource in industry.

In ten programme units with supplementary posters, scientists will discuss their results and findings regarding possible materials and mechanisms of electrodes in CDI, energy generation, extraction of ions and ionic processes. Professor emeritus Bertel Kastening, a well-known pioneer of electro-chemistry, has been invited as guest of honour.

15 October 2015

DNA Is Multibillion-Year-Old Software

Illustration by Julia Suits, The New Yorker Cartoonist & author of The Extraordinary Catalog of Peculiar Inventions.

Nature invented software billions of years before we did. “The origin of life is really the origin of software,” says Gregory Chaitin. Life requires what software does (it’s foundationally algorithmic).

1. “DNA is multibillion-year-old software,” says Chaitin (inventor of mathematical metabiology). We’re surrounded by software, but couldn’t see it until we had suitable thinking tools.

2. Alan Turing described modern software in 1936, inspiring John von Neumann to connect software to biology. Before DNA was understood, von Neumann saw that self-reproducing automata needed software. We now know DNA stores information; it's a biochemical version of Turing’s software tape, but more generally: All that lives must process information. Biology's basic building blocks are processes that make decisions.

3. Casting life as software provides many technomorphic insights (and mis-analogies), but let’s consider just its informational complexity. Do life’s patterns fit the tools of simpler sciences, like physics? How useful are experiments? Algebra? Statistics?

4. The logic of life is more complex than the inanimate sciences need. The deep structure of life’s interactions is algorithmic (loosely, algorithms = logic with if-then-else controls). Can physics-friendly algebra capture life’s biochemical computations?

5. Describing its “pernicious influence” on science, Jack Schwartz says, mathematics succeeds in only “the simplest of situations” or when “rare good fortune makes [a] complex situation hinge upon a few dominant simple factors.”

6. Physics has low “causal density,” a great Jim Manzi coinage. Nothing in physics chooses. Or changes how it chooses. A few simple factors dominate, operating on properties that generally combine in simple ways. Its parameters are independent. Its algebra-friendly patterns generalize well (its equations suit stable categories and equilibrium states).

7. Higher-causal-density domains mean harder experiments (many hard-to-control factors that often can’t be varied independently). Fields like medicine can partly counter their complexity by randomized trials, but reliable generalization requires biological “uniformity of response.”

8. Social sciences have even higher causal densities, so “generalizing from even properly randomized experiments” is “hazardous,” Manzi says. “Omitted variable bias” in human systems is “massive." Randomization ≠ representativeness of results is guaranteed. 

9. Complexity economist Brian Arthur says science’s pattern-grasping toolbox is becoming “more algorithmic ... and less equation-based.” But the nascent algorithmic era hasn’t had its Newton yet.

10. With studies in high-causal-density fields, always consider how representative data is, and ponder if uniform or stable responses are plausible. Human systems are often highly variable; our behaviors aren’t homogenous; they can change types; they’re often not in equilibrium.

11. Bad examples: Malcolm Gladwell puts entertainment first (again) by asserting that “the easiest way to raise people’s scores” is to make a test less readable (n = 40 study, later debunked). Also succumbing to unwarranted extrapolation, leading data-explainer Ezra Klein said, “Cutting-edge research shows that the more information partisans get, the deeper their disagreements.” That study neither represents all kinds of information, nor is a uniform response likely (in fact, assuming that would be ridiculous). Such rash generalizations = a far-from-spotless record.

Mismatched causal density and thinking tools create errors. Entire fields are built on assuming such (mismatched) metaphors and methods.

Related
olicausal sciences; Newton pattern vs. Darwin pattern; the two kinds of data (history ≠ nomothetic); life = game theoretic = fundamentally algorithmic.

(Hat tip to Bryan Atkins @postgenetic for pointer to Brian Arthur).




ORIGINAL: Big Think

lunes, 19 de octubre de 2015

Robotic insect mimics Nature's extreme moves

An international team of Seoul National University and Harvard researchers looked to water strider insects to develop robots that jump off water’s surface

(SEOUL and BOSTON) — The concept of walking on water might sound supernatural, but in fact it is a quite natural phenomenon. Many small living creatures leverage water's surface tension to maneuver themselves around. One of the most complex maneuvers, jumping on water, is achieved by a species of semi-aquatic insects called water striders that not only skim along water's surface but also generate enough upward thrust with their legs to launch themselves airborne from it.


In this video, watch how novel robotic insects developed by a team of Seoul National University and Harvard scientists can jump directly off water's surface. The robots emulate the natural locomotion of water strider insects, which skim on and jump off the surface of water. Credit: Wyss Institute at Harvard University

Now, emulating this natural form of water-based locomotion, an international team of scientists from Seoul National University, Korea (SNU), Harvard’s Wyss Institute for Biologically Inspired Engineering, and the Harvard John A. Paulson School of Engineering and Applied Sciences, has unveiled a novel robotic insect that can jump off of water's surface. In doing so, they have revealed new insights into the natural mechanics that allow water striders to jump from rigid ground or fluid water with the same amount of power and height. The work is reported in the July 31 issue of Science.

"Water's surface needs to be pressed at the right speed for an adequate amount of time, up to a certain depth, in order to achieve jumping," said the study's co–senior author Kyu Jin Cho, Associate Professor in the Department of Mechanical and Aerospace Engineering and Director of the Biorobotics Laboratory at Seoul National University. "The water strider is capable of doing all these things flawlessly."

The water strider, whose legs have slightly curved tips, employs a rotational leg movement to aid in its takeoff from the water’s surface, discovered co–senior author Ho–Young Kim, who is Professor in SNU's Department of Mechanical and Aerospace Engineering and Director of SNU's Micro Fluid Mechanics Lab. Kim, a former Wyss Institute Visiting Scholar, worked with the study’s co–first author Eunjin Yang, a graduate researcher at SNU's Micro Fluid Mechanics lab, to collect water striders and take extensive videos of their movements to analyze the mechanics that enable the insects to skim on and jump off water's surface.

It took the team several trial and error attempts to fully understand the mechanics of the water strider, using robotic prototypes to test and shape their hypotheses.

"If you apply as much force as quickly as possible on water, the limbs will break through the surface and you won’t get anywhere," said Robert Wood, Ph.D., who is a co–author on the study, a Wyss Institute Core Faculty member, the Charles River Professor of Engineering and Applied Sciences at the Harvard Paulson School, and founder of the Harvard Microrobotics Lab.

But by studying water striders in comparison to iterative prototypes of their robotic insect, the SNU and Harvard team discovered that the best way to jump off of water is to maintain leg contact on the water for as long as possible during the jump motion.

"Using its legs to push down on water, the natural water strider exerts the maximum amount of force just below the threshold that would break the water’s surface," said the study's co-first author Je-Sung Koh, Ph.D., who was pursuing his doctoral degree at SNU during the majority of this research and is now a Postdoctoral Fellow at the Wyss Institute and the Harvard Paulson School.

Mimicking these mechanics, the robotic insect built by the team can exert up to 16 times its own body weight on the water's surface without breaking through, and can do so without complicated controls. Many natural organisms such as the water strider can perform extreme styles of locomotion – such as flying, floating, swimming, or jumping on water – with great ease despite a lack of complex cognitive skills.
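
A rough force budget shows why staying below the surface-tension threshold is the whole game: the maximum vertical force the water surface can supply scales with the surface tension of water times the wetted length of the legs. The leg length and mass below are assumptions for illustration, not measured values from the Science paper.

```python
# Rough surface-tension force budget for jumping off water. The wetted leg
# length and robot mass are assumed for illustration; they are not the values
# reported in the Science paper.

SIGMA_WATER = 0.072   # surface tension of water at room temperature, N/m
G = 9.81              # gravitational acceleration, m/s^2

wetted_length_m = 4 * 0.03      # four legs, ~3 cm of wetted length each (assumed)
robot_mass_kg = 70e-6           # ~70 mg robot (assumed)

max_surface_force_n = 2 * SIGMA_WATER * wetted_length_m  # rough upper bound
weight_n = robot_mass_kg * G

print(f"Upper bound on surface-tension force: {max_surface_force_n * 1000:.1f} mN")
print(f"Robot weight:                         {weight_n * 1000:.2f} mN")
print(f"Force-to-weight ratio:                ~{max_surface_force_n / weight_n:.0f}x")
# With these assumptions the surface could in principle support a few tens of
# body weights, comfortably above the ~16x the robot actually exerts.
```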

From left, Seoul National University (SNU) professors Ho-Young Kim, Ph.D., and Kyu Jin Cho, Ph.D., observe the semi-aquatic jumping robotic insects developed by an SNU and Harvard team. Credit: Seoul National University.
"This is due to their natural morphology," said Cho. "It is a form of embodied or physical intelligence, and we can learn from this kind of physical intelligence to build robots that are similarly capable of performing extreme maneuvers without highly–complex controls or artificial intelligence."

The robotic insect was built using a "torque reversal catapult mechanism" inspired by the way a flea jumps, which allows this kind of extreme locomotion without intelligent control. It was first reported by Cho, Wood and Koh in 2013 at the International Conference on Intelligent Robots and Systems.

For the robotic insect to jump off water, the lightweight catapult mechanism uses a burst of momentum coupled with limited thrust to propel the robot off the water without breaking the water's surface. An automatic triggering mechanism, built from composite materials and actuators, was employed to activate the catapult.

To produce the body of the robotic insect, "pop-up" manufacturing was used to create folded composite structures that self-assemble much like the foldable components that "pop–up" in 3D books. Devised by engineers at the Harvard Paulson School and the Wyss Institute, this ingenious layering and folding process enables the rapid fabrication of microrobots and a broad range of electromechanical devices.

"The resulting robotic insects can achieve the same momentum and height that could be generated during a rapid jump on firm ground – but instead can do so on water – by spreading out the jumping thrust over a longer amount of time and in sustaining prolonged contact with the water's surface," said Wood.

"This international collaboration of biologists and roboticists has not only looked into nature to develop a novel, semi–aquatic bioinspired robot that performs a new extreme form of robotic locomotion, but has also provided us with new insights on the natural mechanics at play in water striders," said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D.

Additional co–authors of the study include Gwang–Pil Jung, a Ph.D. candidate in SNU's Biorobotics Laboratory; Sun–Pill Jung, an M.S. candidate in SNU's Biorobotics Laboratory; Jae Hak Son, who earned his Ph.D. in SNU's Laboratory of Behavioral Ecology and Evolution; Sang–Im Lee, Ph.D., who is Research Associate Professor at SNU's Institute of Advanced Machines and Design and Adjunct Research Professor at the SNU's Laboratory of Behavioral Ecology and Evolution; and Piotr Jablonski, Ph.D., who is Professor in SNU's Laboratory of Behavioral Ecology and Evolution.

This work was supported by the National Research Foundation of Korea, Bio–Mimetic Robot Research Center funding from the Defense Acquisition Program Administration, and the Wyss Institute for Biologically Inspired Engineering at Harvard University.


PRESS CONTACTS
Seoul National University College of Engineering
Kyu Jin Cho, kjcho@snu.ac.kr, +82 10-5616-1703

Wyss Institute for Biologically Inspired Engineering at Harvard University
Kat J. McAlpine, katherine.mcalpine@wyss.harvard.edu, +1 617-432-8266

Harvard University John A. Paulson School of Engineering and Applied Sciences
Leah Burrows, lburrows@seas.harvard.edu, +1 617-496-1351

The Seoul National University College of Engineering (SNU CE) (http://eng.snu.ac.kr/english/index.php) aims to foster leaders in global industry and society. In CE, professors from all over the world are applying their passion for education and research. Graduates of the college are taking on important roles in society as the CEOs of conglomerates, founders of venture businesses, and prominent engineers, contributing to the country's industrial development. Globalization is the trend of a new era, and engineering in particular is a field of boundless competition and cooperation. The role of engineers is crucial to our 21st century knowledge and information society, and engineers contribute to the continuous development of Korea toward a central role on the world stage. CE, which provides enhanced curricula in a variety of major fields, has now become the environment in which future global leaders are cultivated.

The Wyss Institute for Biologically Inspired Engineering at Harvard University (http://wyss.harvard.edu) uses Nature's design principles to develop bioinspired materials and devices that will transform medicine and create a more sustainable world. Wyss researchers are developing innovative new engineering solutions for healthcare, energy, architecture, robotics, and manufacturing that are translated into commercial products and therapies through collaborations with clinical investigators, corporate alliances, and formation of new start–ups. The Wyss Institute creates transformative technological breakthroughs by engaging in high risk research, and crosses disciplinary and institutional barriers, working as an alliance that includes Harvard's Schools of Medicine, Engineering, Arts & Sciences and Design, and in partnership with Beth Israel Deaconess Medical Center, Brigham and Women's Hospital, Boston Children's Hospital, Dana–Farber Cancer Institute, Massachusetts General Hospital, the University of Massachusetts Medical School, Spaulding Rehabilitation Hospital, Boston University, Tufts University, and Charité – Universitätsmedizin Berlin, University of Zurich and Massachusetts Institute of Technology.

The Harvard University John A. Paulson School of Engineering and Applied Sciences (http://seas.harvard.edu) serves as the connector and integrator of Harvard's teaching and research efforts in engineering, applied sciences, and technology. Through collaboration with researchers from all parts of Harvard, other universities, and corporate and foundational partners, we bring discovery and innovation directly to bear on improving human life and society.

ORIGINAL: Wyss Institute
Jul 30, 2015

domingo, 18 de octubre de 2015

Stanford engineers create artificial skin that can send pressure sensation to brain cell

Stanford engineers have created a plastic skin-like material that can detect pressure and deliver a Morse code-like signal directly to a living brain cell. The work takes a big step toward adding a sense of touch to prosthetic limbs.

Stanford chemical engineering Professor Zhenan Bao and her team have created a skin-like material that can tell the difference between a soft touch and a firm handshake. The device on the golden “fingertip” is the skin-like sensor developed by Stanford engineers. (Bao Lab)


Stanford engineers have created a plastic "skin" that can detect how hard it is being pressed and generate an electric signal to deliver this sensory input directly to a living brain cell.

Zhenan Bao, a professor of chemical engineering at Stanford, has spent a decade trying to develop a material that mimics skin's ability to flex and heal, while also serving as the sensor net that sends touch, temperature and pain signals to the brain. Ultimately she wants to create a flexible electronic fabric embedded with sensors that could cover a prosthetic limb and replicate some of skin's sensory functions.

Bao's work, reported today in Science, takes another step toward her goal by replicating one aspect of touch, the sensory mechanism that enables us to distinguish the pressure difference between a limp handshake and a firm grip.

"This is the first time a flexible, skin-like material has been able to detect pressure and also transmit a signal to a component of the nervous system," said Bao, who led the 17-person research team responsible for the achievement.

Benjamin Tee, a recent doctoral graduate in electrical engineering; Alex Chortos, a doctoral candidate in materials science and engineering; and Andre Berndt, a postdoctoral scholar in bioengineering, were the lead authors on the Science paper.

DIGITIZING TOUCH
The heart of the technique is a two-ply plastic construct: the top layer creates a sensing mechanism and the bottom layer acts as the circuit to transport electrical signals and translate them into biochemical stimuli compatible with nerve cells. The top layer in the new work featured a sensor that can detect pressure over the same range as human skin, from a light finger tap to a firm handshake.

Five years ago, Bao's team members first described how to use plastics and rubbers as pressure sensors by measuring the natural springiness of their molecular structures. They then increased this natural pressure sensitivity by indenting a waffle pattern into the thin plastic, which further compresses the plastic's molecular springs.

To exploit this pressure-sensing capability electronically, the team scattered billions of carbon nanotubes through the waffled plastic. Putting pressure on the plastic squeezes the nanotubes closer together and enables them to conduct electricity.

This allowed the plastic sensor to mimic human skin, which transmits pressure information to the brain as short pulses of electricity, similar to Morse code. Increasing pressure on the waffled nanotubes squeezes them even closer together, allowing more electricity to flow through the sensor, and those varied impulses are sent as short pulses to the sensing mechanism. Remove pressure, and the flow of pulses relaxes, indicating light touch. Remove all pressure and the pulses cease entirely.
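
Conceptually, the chain is: more pressure, more nanotube contact, more current, a faster pulse train. Here is a minimal sketch of that mapping; the response curve and its parameters are invented for illustration, not the sensor's measured calibration.

```python
# Toy model of the skin's digital output: pressure modulates conductance, and
# conductance sets the frequency of the Morse-code-like pulse train. The
# response curve is invented for illustration, not the device's calibration.

def pulse_frequency_hz(pressure_kpa: float) -> float:
    """Map applied pressure to an output pulse rate (illustrative, saturating)."""
    max_rate_hz = 200.0      # assumed ceiling of the pulse generator
    half_point_kpa = 20.0    # pressure giving half the maximum rate (assumed)
    return max_rate_hz * pressure_kpa / (pressure_kpa + half_point_kpa)

for p in (0.0, 1.0, 10.0, 50.0):  # light tap ... firm handshake, kPa
    print(f"{p:5.1f} kPa -> {pulse_frequency_hz(p):6.1f} pulses/s")
```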

The team then hooked this pressure-sensing mechanism to the second ply of their artificial skin, a flexible electronic circuit that could carry pulses of electricity to nerve cells.

IMPORTING THE SIGNAL
Bao's team has been developing flexible electronics that can bend without breaking. For this project, team members worked with researchers from PARC, a Xerox company, which has a technology that uses an inkjet printer to deposit flexible circuits onto plastic. Covering a large surface is important to making artificial skin practical, and the PARC collaboration offered that prospect.

Finally the team had to prove that the electronic signal could be recognized by a biological neuron. It did this by adapting a technique developed by Karl Deisseroth, a fellow professor of bioengineering at Stanford who pioneered a field that combines genetics and optics, called optogenetics. Researchers bioengineer cells to make them sensitive to specific frequencies of light, then use light pulses to switch cells, or the processes being carried on inside them, on and off.

For this experiment the team members engineered a line of neurons to simulate a portion of the human nervous system. They translated the electronic pressure signals from the artificial skin into light pulses, which activated the neurons, proving that the artificial skin could generate a sensory output compatible with nerve cells.

Optogenetics was only used as an experimental proof of concept, Bao said, and other methods of stimulating nerves are likely to be used in real prosthetic devices. Bao's team has already worked with Bianxiao Cui, an associate professor of chemistry at Stanford, to show that direct stimulation of neurons with electrical pulses is possible.

Bao's team envisions developing different sensors to replicate, for instance, the ability to distinguish corduroy versus silk, or a cold glass of water from a hot cup of coffee. This will take time. There are six types of biological sensing mechanisms in the human hand, and the experiment described in Science reports success in just one of them.

But the current two-ply approach means the team can add sensations as it develops new mechanisms. And the inkjet printing fabrication process suggests how a network of sensors could be deposited over a flexible layer and folded over a prosthetic hand.

"We have a lot of work to take this from experimental to practical applications," Bao said. "But after spending many years in this work, I now see a clear path where we can take our artificial skin."

ORIGINAL: Stanford
By Tom Abate

viernes, 16 de octubre de 2015

Automating big-data analysis. System that replaces human intuition with algorithms outperforms 615 of 906 human teams.


Big-data analysis consists of searching for buried patterns that have some kind of predictive power. But choosing which “features” of the data to analyze usually requires some human intuition. In a database containing, say, the beginning and end dates of various sales promotions and weekly profits, the crucial data may not be the dates themselves but the spans between them, or not the total profits but the averages across those spans.
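
The date-span example is easy to make concrete. Below is a minimal pandas sketch of this kind of hand-crafted feature engineering; the table and column names are invented for illustration.

```python
# Hand-crafted feature engineering of the kind described above: from promotion
# start/end dates and total profit, derive the span length and the average
# weekly profit over that span. Table and column names are invented.
import pandas as pd

promos = pd.DataFrame({
    "promo_id": [1, 2],
    "start": pd.to_datetime(["2015-01-05", "2015-03-02"]),
    "end":   pd.to_datetime(["2015-01-26", "2015-03-09"]),
    "total_profit": [8400.0, 2100.0],
})

promos["span_days"] = (promos["end"] - promos["start"]).dt.days
promos["avg_weekly_profit"] = promos["total_profit"] / (promos["span_days"] / 7)
print(promos[["promo_id", "span_days", "avg_weekly_profit"]])
```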

MIT researchers aim to take the human element out of big-data analysis, with a new system that not only searches for patterns but designs the feature set, too. To test the first prototype of their system, they enrolled it in three data science competitions, in which it competed against human teams to find predictive patterns in unfamiliar data sets. Of the 906 teams participating in the three competitions, the researchers’ “Data Science Machine” finished ahead of 615.

In two of the three competitions, the predictions made by the Data Science Machine were 94 percent and 96 percent as accurate as the winning submissions. In the third, the figure was a more modest 87 percent. But where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries.

“We view the Data Science Machine as a natural complement to human intelligence,” says Max Kanter, whose MIT master’s thesis in computer science is the basis of the Data Science Machine. “There’s so much data out there to be analyzed. And right now it’s just sitting there not doing anything. So maybe we can come up with a solution that will at least get us started on it, at least get us moving.”

Between the lines
Kanter and his thesis advisor, Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), describe the Data Science Machine in a paper that Kanter will present next week at the IEEE International Conference on Data Science and Advanced Analytics.

Veeramachaneni co-leads the Anyscale Learning for All group at CSAIL, which applies machine-learning techniques to practical problems in big-data analysis, such as determining the power-generation capacity of wind-farm sites or predicting which students are at risk for dropping out of online courses.

“What we observed from our experience solving a number of data science problems for industry is that one of the very critical steps is called feature engineering,” Veeramachaneni says. “The first thing you have to do is identify what variables to extract from the database or compose, and for that, you have to come up with a lot of ideas.”

In predicting dropout, for instance, two crucial indicators proved to be how long before a deadline a student begins working on a problem set and how much time the student spends on the course website relative to his or her classmates. MIT’s online-learning platform MITx doesn’t record either of those statistics, but it does collect data from which they can be inferred.

Feature composition
Kanter and Veeramachaneni use a couple of tricks to manufacture candidate features for data analyses.

  • One is to exploit structural relationships inherent in database design. Databases typically store different types of data in different tables, indicating the correlations between them using numerical identifiers. The Data Science Machine tracks these correlations, using them as a cue to feature construction. For instance, one table might list retail items and their costs; another might list items included in individual customers’ purchases. The Data Science Machine would begin by importing costs from the first table into the second. Then, taking its cue from the association of several different items in the second table with the same purchase number, it would execute a suite of operations to generate candidate features: total cost per order, average cost per order, minimum cost per order, and so on. As numerical identifiers proliferate across tables, the Data Science Machine layers operations on top of each other, finding minima of averages, averages of sums, and so on (a pandas sketch of these aggregations follows this list).
  • It also looks for so-called categorical data, which appear to be restricted to a limited range of values, such as days of the week or brand names. It then generates further feature candidates by dividing up existing features across categories. Once it’s produced an array of candidates, it reduces their number by identifying those whose values seem to be correlated. Then it starts testing its reduced set of features on sample data, recombining them in different ways to optimize the accuracy of the predictions they yield.
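
As promised above, here is a hedged pandas sketch of that cross-table aggregation pattern. The tables and column names are invented, and this mimics the general idea rather than the Data Science Machine's actual implementation.

```python
# Illustration of following a database relationship to manufacture candidate
# features: join item costs into an order-lines table, then aggregate per order.
# Tables and column names are invented; this mimics the pattern, not the
# Data Science Machine's actual code.
import pandas as pd

items = pd.DataFrame({"item_id": [1, 2, 3], "cost": [4.99, 12.50, 3.25]})
order_lines = pd.DataFrame({
    "order_id": [100, 100, 101, 101, 101],
    "item_id":  [1, 2, 1, 2, 3],
})

joined = order_lines.merge(items, on="item_id")  # import costs across tables
candidate_features = joined.groupby("order_id")["cost"].agg(
    total_cost="sum", average_cost="mean", minimum_cost="min"
)
print(candidate_features)
```
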
“The Data Science Machine is one of those unbelievable projects where applying cutting-edge research to solve practical problems opens an entirely new way of looking at the problem,” says Margo Seltzer, a professor of computer science at Harvard University who was not involved in the work. “I think what they’ve done is going to become the standard quickly — very quickly.”

ORIGINAL: MIT News
Larry Hardesty | MIT News Office 
October 16, 2015