Sunday, March 31, 2013

First Love Child of Human, Neanderthal Found

ORIGINAL: Discovery
MAR 27, 2013



The skeletal remains of an individual living in northern Italy 40,000-30,000 years ago are believed to be those of a human/Neanderthal hybrid, according to a paper in PLoS ONE.

If further analysis proves the theory correct, the remains belonged to the first known such hybrid, providing direct evidence that humans and Neanderthals interbred. Prior genetic research determined that the DNA of people with European and Asian ancestry is 1 to 4 percent Neanderthal.

The present study focuses on the individual’s jaw, which was unearthed at a rock-shelter called Riparo di Mezzena in the Monti Lessini region of Italy. Both Neanderthals and modern humans inhabited Europe at the time.

“From the morphology of the lower jaw, the face of the Mezzena individual would have looked somehow intermediate between classic Neanderthals, who had a rather receding lower jaw (no chin), and the modern humans, who present a projecting lower jaw with a strongly developed chin,” co-author Silvana Condemi, an anthropologist, told Discovery News.

Condemi is the CNRS research director at the University of Aix-Marseille. She and her colleagues studied the remains via DNA analysis and 3D imaging. They then compared those results with the same features from Homo sapiens.

The genetic analysis shows that the individual’s mitochondrial DNA is Neanderthal. Since this DNA is transmitted from a mother to her child, the researchers conclude that it was a “female Neanderthal who mated with male Homo sapiens.”

By the time modern humans arrived in the area, the Neanderthals had already established their own culture, Mousterian, which lasted some 200,000 years. Numerous flint tools, such as axes and spear points, have been associated with the Mousterian. The artifacts are typically found in rock shelters, such as the Riparo di Mezzena, and caves throughout Europe.

The researchers found that, although the hybridization between the two hominid species likely took place, the Neanderthals continued to uphold their own cultural traditions.

That's an intriguing clue, because it suggests that the two populations did not simply meet, mate and merge into a single group.


As Condemi and her colleagues wrote, the mandible supports the theory of "a slow process of replacement of Neanderthals by the invading modern human populations, as well as additional evidence of the upholding of the Neanderthals' cultural identity."

Prior fossil finds indicate that modern humans were living in a southern Italy cave as early as 45,000 years ago. Modern humans and Neanderthals therefore lived in roughly the same regions for thousands of years, but the new human arrivals, from the Neanderthal perspective, might not have been welcome, and for good reason. The research team hints that the modern humans may have raped female Neanderthals, bringing to mind modern cases of "ethnic cleansing."

Ian Tattersall is one of the world’s leading experts on Neanderthals and the human fossil record. He is a paleoanthropologist and a curator emeritus at the American Museum of Natural History.

Tattersall told Discovery News that the hypothesis presented in the new paper “is very intriguing and one that invites more research.”

Neanderthal culture and purebred Neanderthals all died out 35,000-30,000 years ago.

Faces of Our Ancestors
Back In The Beginning
Back in the Beginning. To put a human face on our ancestors, scientists from the Senckenberg Research Institute used sophisticated methods to form 27 model heads based on tiny bone fragments, teeth and skulls collected from across the globe. The heads are on display for the first time together at the Senckenberg Natural History Museum in Frankfurt, Germany. This model is Sahelanthropus tchadensis, also nicknamed "Toumai," who lived 6.8 million years ago. Parts of its jaw bone and teeth were found nine years ago in the Djurab desert in Chad. It's one of the oldest hominid specimens ever found. WASHINGTON STATE UNIVERSITY; SVEN TRAENKNER

Australopithecus Afarensis
Australopithecus afarensis. With each new discovery, paleoanthropologists have to rewrite the origins of man's ancestors, adding on new branches and tracking when species split. This model was fashioned from pieces of a skull and jaw found among the remains of 17 pre-humans (nine adults, three adolescents and five children) which were discovered in the Afar Region of Ethiopia in 1975. The ape-man species, Australopithecus afarensis, is believed to have lived 3.2 million years ago. Several more bones from this species have been found in Ethiopia, including the famed "Lucy," a nearly complete A. afarensis skeleton found in Hadar. MINNESOTA STATE UNIVERSITY; SVEN TRAENKNER

Australopithecus Africanus
Australopithecus africanus. Meet "Mrs. Ples," the popular nickname for the most complete skull of an Australopithecus africanus, unearthed in Sterkfontein, South Africa in 1947. It is believed she lived 2.5 million years ago (although the sex of the fossil is not entirely certain). Crystals found on her skull suggest that she died after falling into a chalk pit, which was later filled with sediment. A. africanus has long puzzled scientists because of its massive jaws and teeth, but they now believe the species' skull design was optimal for cracking nuts and seeds. PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCE

Paranthropus Aethiopicus
Paranthropus aethiopicus. The skull of this adult male was found on the western shore of Lake Turkana in Kenya in 1985. The shape of the mouth indicates that he had a strong bite and could chew plants. He is believed to have lived 2.5 million years ago and is classified as Paranthropus aethiopicus. Much is still unknown about this species because so few remains of P. aethiopicus have been found. SMITHSONIAN MUSEUM; SVEN TRAENKNER (C), "SAFARI ZUM URMENSCHEN"

Paranthropus Boisei
Paranthropus boisei. Researchers shaped this skull of "Zinj," found in 1959. The adult male lived 1.8 million years ago in the Olduvai Gorge of Tanzania. His scientific name is Paranthropus boisei, though he was originally called Zinjanthropus boisei -- hence the nickname. First discovered by anthropologist Mary Leakey, the well-preserved cranium has a small brain cavity. He would have eaten seeds, plants and roots which he probably dug with sticks or bones. SVEN TRAENKNER (C), "SAFARI ZUM URMENSCHEN"

Homo Rudolfensis
Homo rudolfensis. This model of an early human species -- Homo rudolfensis -- was made from bone fragments found in Koobi Fora, Kenya, in 1972. The adult male is believed to have lived about 1.8 million years ago. He used stone tools and ate meat and plants. H. rudolfensis' distinctive features include a flatter, broader face and broader postcanine teeth, with more complex crowns and roots. He is also recognized as having a larger cranium than his contemporaries. MINNESOTA STATE UNIVERSITY; SVEN TRAENKNER

Homo Ergaster
Homo ergaster. The almost perfectly preserved skeleton of the "Turkana Boy" is one of the most spectacular discoveries in paleoanthropology. Judging from his anatomy, scientists believe this Homo ergaster was a tall youth about 13 to 15 years old. According to research, the boy died beside a shallow river delta, where he was covered by alluvial sediments. Comparing the shape of the skull and teeth, H. ergaster had a similar head structure to the Asian Homo erectus. SVEN TRAENKNER (C), "SAFARI ZUM URMENSCHEN"

Homo Heidelbergensis
Homo heidelbergensis. This adult male, Homo heidelbergensis, was discovered in Sima de los Huesos, Spain, in 1993. Judging by the skull and cranium, scientists believe he probably died from a massive infection that caused a facial deformation. The model, shown here, does not include the deformity. This species is believed to be an ancestor of Neanderthals, as seen in the shape of his face. "Miquelon," the nickname of "Atapuerca 5", lived about 500,000 to 350,000 years ago, and fossils of this species have been found in Italy, France and Greece. SVEN TRAENKNER (C), "SAFARI ZUM URMENSCHEN"

Homo Neanderthalensis
Homo neanderthalensis. The "Old Man of La Chapelle" was recreated from the skull and jaw of a Homo neanderthalensis male found near La Chapelle-aux-Saints, France, in 1908. He lived 56,000 years ago. His relatively old age, thought to be between 40 and 50 years, indicates he was well looked after by a clan. The old man's skeleton indicates he suffered from a number of afflictions, including arthritis, and had numerous broken bones. Scientists did not at first realize the age and afflicted state of this specimen, which led them to incorrectly theorize that male Neanderthals were hunched over when they walked. SVEN TRAENKNER (C), "SAFARI ZUM URMENSCHEN"

Homo Floresiensis
Homo floresiensis. The skull and jaw of this female "hobbit" were found in Liang Bua, Flores, Indonesia, in 2003. She was about 1 meter tall (about 3'3") and lived about 18,000 years ago. The discovery of her species, Homo floresiensis, called into question the belief that Homo sapiens was the only form of mankind for the past 30,000 years. Scientists are still debating whether Homo floresiensis was its own species, or merely a group of diseased modern humans. Evidence is mounting that these small beings were, in fact, a distinct human species. SVEN TRAENKNER (C), "SAFARI ZUM URMENSCHEN"

Homo sapiens
Homo sapiens. Bones can only tell us so much. Experts often assume or make educated guesses to fill in the gaps in mankind's family tree, and to develop a sense of what our ancestors may have looked like. Judging from skull and mandible fragments found in a cave in Israel in 1969, this young female Homo sapiens lived between 100,000 and 90,000 years ago. Her bones indicate she was about 20 years old. Her shattered skull was found among the remains of 20 others in a shallow grave. SVEN TRAENKNER (C), "SAFARI ZUM URMENSCHEN"


This Week in Science 23-31 March

ORIGINAL: IFLS


Cancer genes: http://bit.ly/XkbFug
Magnetic charges of matter & antimatter: http://bit.ly/XBZbtN
Seven sex mating system: http://bit.ly/15ZOuVp
Down's Syndrome: http://bit.ly/105Cznm
Gene therapy: http://bit.ly/160HEz6
Neanderthal hybrid: http://bit.ly/YJxbVl

Artificial Intelligence (AI) Means More Than Just Neat Gadgets - It Could Mean a Greener Future for Everyone

ORIGINAL: Huffington Post
28/03/2013

(*) Reader in Agents, Interaction and Complexity Research Group in the School of Electronics and Computer Science at the University of Southampton 

Despite a number of false dawns, artificial intelligence (AI) is finally beginning to deliver on some of its early promise. When the field was founded as an academic endeavor in the 1950s, predictions of thinking machines with human-level intelligence within the next twenty years were common. Unfortunately, it has taken a lot longer than that, and such human-level intelligence is still some way off, if achievable at all.

However, the growth of computer power over the same period has meant that the devices and services we use every day now routinely tackle many of the early challenges of AI. Our cameras and smartphones locate faces in images and focus accordingly. Social networking websites recognize our friends in these same images and automatically tag them. Our cars can already park themselves and automatically brake when they recognize pedestrians stepping into the road, and prototype fully autonomous self-driving cars are clocking up miles on real roads. Closer to that original vision of human-level intelligence, last year Watson, a supercomputer built by IBM, famously beat human players on the quiz show Jeopardy!, and is now being re-tasked to train medical doctors. Meanwhile Google's co-founder, Larry Page, has argued that the company will eventually become an artificial intelligence, and has recruited many of the world's top AI scientists to work toward this goal.

Despite this progress, one area where there has been very little deployment of AI technologies is in our homes. Like AI itself, the vision of a smart home, full of autonomous devices that learn what we want and automatically take care of household chores has a long history, and still seems to many like science fiction. Unlike our gadgets and cars, which tend to be replaced relatively frequently, our homes are updated only rarely, and smart home appliances, such as autonomous vacuum cleaners, are largely seen as novelties rather than necessities. 

However, new concerns about ever-rising energy costs, and the impact of carbon emissions from domestic energy use on climate change, are beginning to change this. Last year, Nest Labs, founded by the creator of the iPod, launched the Nest Learning Thermostat, which combines iPod-like design with smart algorithms that learn the householders' preferences and automatically recognize when the home is unoccupied, adjusting the heating or cooling system appropriately.

At the University of Southampton, we're trying to make the same sort of analysis available to people who aren't yet willing or able to go out and spend money on new gadgets. The Electronics and Computer Science department where I work, one of the world's largest and most successful electronics and computer science departments, can uniquely tackle both the hardware and the AI challenges in this space. Our system, named MyJoulo, aims to reduce your heating bill by providing personalised advice on your home's energy consumption. It uses a small, simple-to-use Joulo logger, which measures the temperature at the thermostat every two minutes for a week. When this data is uploaded to the MyJoulo website, smart AI algorithms are used to build a mathematical model of how the home responds to heat and how the home's occupants are using the heating system. Using additional external temperature data from the internet, we can then calculate the energy savings that will be achieved by turning down the thermostat or changing timer settings, and can spot homes which leak heat most rapidly. By building a representative map of the UK building stock, the MyJoulo project will ultimately be able to identify particularly leaky homes, and prioritise interventions such as the installation of better insulation.
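The model-fitting step can be sketched in a few lines. The snippet below is a generic illustration, not MyJoulo's actual algorithm: it fits the simplest possible building model, a first-order (lumped-RC) cooling curve T(t) = T_out + (T0 - T_out)·exp(-t/tau), to readings taken every two minutes, recovering the home's thermal time constant tau, a direct measure of how fast the home leaks heat.

```python
# Hypothetical sketch: estimate a home's thermal time constant tau (hours)
# from indoor temperature readings logged every 2 minutes while the heating
# is off and the outdoor temperature is roughly constant. Not MyJoulo's
# actual algorithm -- a generic first-order (lumped-RC) cooling model.
import math

def fit_time_constant(temps_c, outdoor_c, dt_minutes=2.0):
    t0 = temps_c[0]
    xs, ys = [], []
    for i, t in enumerate(temps_c):
        excess = t - outdoor_c
        if excess <= 0:
            break
        xs.append(i * dt_minutes / 60.0)              # hours since start
        ys.append(math.log(excess / (t0 - outdoor_c)))  # ln of excess ratio
    # For T(t) = T_out + (T0 - T_out)*exp(-t/tau), the slope of
    # ln(excess ratio) vs time is -1/tau; fit it by least squares.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -1.0 / slope

# Synthetic overnight cooling: 20 C indoors, 5 C outside, true tau = 30 h
tau_true = 30.0
readings = [5 + 15 * math.exp(-(i * 2 / 60) / tau_true) for i in range(300)]
print(round(fit_time_constant(readings, 5.0), 1))
```

A leakier home has a smaller tau, so one number per home would already let a project like this rank homes for insulation interventions.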

Such smart technologies applied within the home are likely to be essential if the UK is to reach its ambitious target of reducing carbon emissions by 80% by 2050. MyJoulo represents a first step along this path, potentially reducing the millions of kilowatt-hours of energy wasted per year, and saving households money at the same time.

300,000 mirrors: World's largest thermal solar plant (377MW) under construction in the Mojave

ORIGINAL: TreeHugger
March 27, 2013

credit: Brightsource
The largest concentrating solar power plant (100 MW) in operation is currently in Abu Dhabi, but it won't stay at the top of the list for too long. Brightsource Energy is putting the finishing touches on its massive Ivanpah concentrating solar power (CSP) plant in the Mojave desert, and if all goes well, the switch should be flipped this year.


credit: Brightsource
Ivanpah will have a capacity of 377 megawatts, or about enough energy to power 140,000 houses. It took more than 5 years to plan it, get permits, finance it, and build it. The shot above shows an early phase of construction.
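A quick back-of-the-envelope check makes the 140,000-homes figure plausible. The capacity factor and per-home consumption below are illustrative assumptions, not Brightsource's numbers:

```python
# Sanity check of "377 MW is about enough to power 140,000 houses".
capacity_mw = 377
capacity_factor = 0.30        # assumed for CSP without thermal storage
hours_per_year = 8760
annual_mwh = capacity_mw * capacity_factor * hours_per_year

kwh_per_home_per_year = 8000  # assumed average household consumption
homes = annual_mwh * 1000 / kwh_per_home_per_year
print(round(annual_mwh), round(homes))
```

Under these assumptions the plant generates roughly 990,000 MWh a year, enough for on the order of 120,000-140,000 homes, consistent with the figure quoted.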

credit: Brightsource
Here are some mirrors being brought to the site to be installed. At Ivanpah alone, over 300,000 software-controlled mirrors will track the sun and reflect the sunlight onto boilers that sit atop three 459-foot-tall towers. The heat turns water into steam, which drives turbines to generate electricity.
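The aiming rule those software-controlled mirrors implement is just the law of reflection: a heliostat's surface normal must bisect the angle between the direction to the sun and the direction to the tower's receiver. A minimal sketch (the function names are illustrative, not Brightsource's software):

```python
# Law-of-reflection aiming for a heliostat: the mirror normal is the
# normalized bisector of the unit vectors toward the sun and the receiver.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def heliostat_normal(to_sun, to_tower):
    s, t = normalize(to_sun), normalize(to_tower)
    return normalize(tuple(a + b for a, b in zip(s, t)))

# Example: sun directly overhead, tower due east of the mirror;
# the mirror must tilt 45 degrees toward the tower.
print(heliostat_normal((0, 0, 1), (1, 0, 0)))
```

In a real plant each heliostat recomputes this normal continuously as the sun moves, which is why the mirrors need software control rather than fixed mounts.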

credit: Brightsource
Brightsource says that the project has created 2,100 jobs for construction workers and support staff and will have generated about $650 million in employee wages and earnings. Of course, most of this is during the construction phase. Once the CSP plant is up and running, it will take fewer than 100 people to maintain it... But the construction workers can then move on to building another one, or maybe a wind farm.

credit: Brightsource
Here's a shot that shows just how many mirrors are used. It's really amazing how big this is, and how much solar energy will be concentrated into that (relatively) small tower at the center.

credit: Brightsource
This aerial shot shows one of the towers well.

credit: Brightsource
The second tower, with some mirrors still left to be installed.

credit: Brightsource
Here we can see how concentrating solar power works. Step #4 is particularly important; it makes it possible to store heat and generate electricity when the sun isn't shining, which is useful since peak use extends into the evening.

Ivanpah will not use thermal storage, but future Brightsource projects probably will.

credit: Brightsource
Here you can see where the three towers are in relation to each other.

credit: Brightsource
A different view of the solar power plant.

credit: Brightsource
A closeup of the central tower. Notice how small the cars on the ground seem compared to it. The scale of these things is huge.

credit: Brightsource
This is a computer-generated rendering by Brightsource that shows what the final product will look like when in operation. You wouldn't want to be an ant climbing the side of that central tower...

credit: Brightsource
Here's another rendering.

Bio-Nanowires Conduct Electricity

ORIGINAL: Rajini Rao


Imagine a conducting nanowire, only 3-5 nm wide but thousands of times longer, connecting a microbial community to form mini power grids. Naturally occurring soil bacteria, such as Geobacter, use these conductive pili for long-range electron transport. How and why do they do this?

▶ All living organisms respire. Our cells break down sugars to obtain energy by extracting electrons that are handed down a relay chain to oxygen, which becomes water. The proteins (cytochromes) that conduct electrons are aided by special metallic centers, studded with iron, so they can cycle between Fe2+ (ferrous) and Fe3+ (ferric) states that differ by one electron. Geobacter uses these cytochromes too, just as our cells do. But oxygen only made its debut a mere 2.4 billion years ago. Before that, ancient bacteria shuttled the electrons to other acceptors, such as sulfides, nitrates and Fe3+. When Geobacter is deprived of oxygen, it grows out long pili into mineral rich rocks and "breathes" iron (drawing on top right). The current is believed to pass between layers of bacteria (middle right image) across a distance of 12 millimetres, which may not seem large, but is 10,000 body lengths to bacteria!

But can proteins conduct current? Researchers knew that the pili were conductive, behaving like ohmic devices (image at bottom right). Although the pili were decorated with cytochromes, these were spaced too far apart to transfer electrons between their metallic centers. When the protein chains were mutated to replace one type of amino acid, the pili lost conductivity. These "aromatic" amino acids have pi-pi orbitals that may be conducting the electrons.

Live Wires: Bacterial nanowires can be used in generating microbial energy cells, bioremediation of pollutants (like uranium), and in nano-manufacturing of a variety of devices. The main image shows bacteria growing on metal electrodes.


Dolphin discovery-Bubble Rings!

ORIGINAL: Ryan Burke

Dolphins and a human producing unbelievable bubble rings that defy explanation. Simply amazing -- a must-see video.

Abel Prize 2013 goes to Pierre Deligne, and Milner Prize to Alexandre Polyakov

ORIGINAL: viXra

Pierre Deligne
The Abel prize in mathematics for 2013 has been awarded to Pierre Deligne for his work on algebraic geometry which has been applied to number theory and representation theory. This is research that is at the heart of some of the most exciting mathematics of our time with deep implications that could extend out from pure mathematics to physics. 

Deligne is from Belgium and works at IAS Princeton. 

I obviously can’t beat the commentary from Tim Gowers who once again spoke at the announcement about what the achievement means, so see his blog if you are interested in what it is all about. 

Update: Also today the fundamental Physics Milner Prize went to Polyakov, another worthy choice. 

Update: Some bloggers such as Strassler and Woit seem uncertain this morning about whether Polyakov got the prize. He did. They played a strange trick on the audience watching the live webcast from CERN by running a 20 minute film just before the final award. They did not have broadcast rights for the film so they had to stop the webcast. After that the webcast resumed but you had to refresh your browser at the right moment to get it back. The final award to Polyakov was immediately after the film so many people would have missed it. I saw most of it and can confirm that Polyakov was the only one who finished the night with two balls (so to speak). To make matters worse there does not seem to have been a press announcement yet so it is not being reported in mainstream news, but that will surely change this morning. As bloggers we are grateful to Milner for this chance to be ahead of the MSM again. 

I would have done a screen grab to get a picture of Polyakov but CERN have recently changed their copyright terms so that we cannot show images from CERN without satisfying certain conditions. This contrasts sharply with US government rules which ensure that any images or video taken from US research organisations are public domain without conditions.

Artificial muscle computer performs as a universal Turing machine

ORIGINAL: Physorg
by Lisa Zyga

An illustration of Wolfram’s “2, 3” Turing machine, the simplest known universal Turing machine that can solve any computable problem. A machine head reads the tape, decides what to do based on the data it sees plus its internal state (1 or 0), and then writes the data and moves a step left or right. The researchers realized this Turing machine using artificial muscles to help perform logic and memory functions. Credit: O’Brien and Anderson. ©2013 American Institute of Physics

(Phys.org) —In 1936, Alan Turing showed that all computers are simply manifestations of an underlying logical architecture, no matter what materials they're made of. Although most of the computers we're familiar with are made of silicon semiconductors, other computers have been made of DNA, light, Legos, paper, and many other unconventional materials.

Now in a new study, scientists have built a computer made of artificial muscles that are themselves made of electroactive polymers. The artificial muscle computer is an example of the simplest known universal Turing machine, and as such it is capable of solving any computable problem given sufficient time and memory. By showing that artificial muscles can "think," the study paves the way for the development of smart, lifelike prostheses and soft robots that can conform to changing environments.

The authors, Benjamin Marc O'Brien and Iain Alexander Anderson at the University of Auckland in New Zealand, have published their study on the artificial muscle computer in a recent issue of Applied Physics Letters.

"To the best of our knowledge, this is the first time a computer has been built out of artificial muscles," O'Brien told Phys.org. "What makes it exciting is that the technology can be directly and intimately embedded into artificial muscle devices, giving them lifelike reflexes. Even though our computer has hard bits, the technology is fundamentally soft and stretchy, something that traditional methods of computation struggle with."
Video of the artificial muscle computer at work. Credit: O’Brien and Anderson. ©2013 American Institute of Physics

The artificial muscle computer is modeled on Stephen Wolfram's "2, 3" Turing machine architecture, which is the simplest known universal Turing machine. It consists of a machine head that reads symbols stored on a tape, and then based on the symbols and its own state (0 or 1), it follows a set of instructions that tells it what to write and store. The 2, 3 Turing machine is ideal to build with artificial muscles because of its simplicity. The researchers could theoretically solve any computational problem using just 13 muscles.
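A machine of this shape is easy to simulate in software. The sketch below implements a generic two-state, three-symbol Turing machine of the kind described above; note that the rule table is an arbitrary toy chosen for illustration, not Wolfram's actual "2, 3" rules:

```python
# Minimal simulator for a 2-state, 3-symbol Turing machine.
from collections import defaultdict

def run(rules, steps, head=0, state="A"):
    """rules maps (state, symbol_read) -> (symbol_to_write, move, next_state)."""
    tape = defaultdict(int)   # unbounded tape; blank cells read as 0
    for _ in range(steps):
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move          # move is -1 (left) or +1 (right)
    return tape, head, state

# Arbitrary toy rule table (NOT Wolfram's specific "2, 3" table).
rules = {
    ("A", 0): (1, 1, "B"), ("A", 1): (2, -1, "A"), ("A", 2): (1, -1, "A"),
    ("B", 0): (2, -1, "A"), ("B", 1): (2, 1, "B"), ("B", 2): (0, 1, "A"),
}

tape, head, state = run(rules, steps=5)
print(dict(tape), head, state)
```

The researchers' machine does the same bookkeeping mechanically: sliding elements play the role of the tape dictionary, and expanding muscles closing switches play the role of the rule-table lookup.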

By expanding and contracting, the artificial muscles performed a variety of mechanisms involved in the computing process. For example, the muscles pushed sliding elements into position, and the sliding elements were used to encode data. Artificial muscles were also used to make the instruction set that the machine head uses to make decisions. In this case, when a muscle expands, it compresses a switch, causing it to conduct charge. 

In its current version, the artificial muscle computer is very large (about 1 m³) and extremely slow (0.15 Hz). However, the researchers demonstrated that it could evolve the correct sequence of calculations in response to a test input, and they predict that the computer's performance could be significantly improved. In the future, the researchers also want to investigate whether this type of computer would perform better using an analog rather than digital architecture.

(Left) The artificial muscle computer. (Right) Sample steps for a sequence of calculations performed by the computer. Credit: O’Brien and Anderson. ©2013 American Institute of Physics
Overall, the demonstration that artificial muscles can be made to compute and "think" has implications for future prosthetics and soft robots. By sensing, computing, and moving, artificial muscles could give these devices the ability to conform to complex and uncertain environments, as well as give them reflexes like the real muscles seen in nature.

"If you look at life you see these amazing capabilities and structures," O'Brien said. "The octopus, for example, has extremely dexterous infinite-degree-of-freedom manipulators. Such manipulators would be great for our own robots, but there is the huge challenge of how to control them—the degrees of freedom can overwhelm a central controller. Octopuses solve this by distributing neurons throughout their arms. With artificial muscle logic, we might one day be able to do the same."

The researchers plan to take several steps in order to reach these goals.

"In the future we would like to miniaturize the technology to make it go faster and become more portable; develop materials that last longer before failing; make the computer entirely soft; explore analogue architectures; and build a soft robotic manipulator with a built-in computer," O'Brien said.

The researchers have also recently formed a company called Stretch Sense that makes soft wireless stretch sensors using artificial muscle technology. In the future, they hope to commercialize their artificial muscle computing as well.

More information: Benjamin Marc O'Brien and Iain Alexander Anderson. "An artificial muscle computer." Applied Physics Letters 102, 104102 (2013). DOI: 10.1063/1.4793648

Journal reference: Applied Physics Letters

Copyright 2013 Phys.org 
All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of Phys.org.

Saturday, March 30, 2013

Is Brain Mapping Ready for Big Science?

ORIGINAL: GEN


The BAM project will be an expensive undertaking. Will it be worth the cost?

The Brain Activity Map Project is aimed at reconstructing the full record of neural activity across complete neural circuits, with the goal of understanding fundamental and pathological brain processes. [V. Yakobchuk/Fotolia.com]
President Barack Obama’s public-private initiative to create an activity map of the human brain will cost more than $3 billion, projections say, or $300 million annually for 10 years. The project has multiple private and public institutions lined up to participate, including the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation. All parties hope that the initiative will move brain science forward with the same kind of money and focused effort that drove the Genome Project.

“Every dollar we invested to map the human genome returned $140 to our economy—every dollar,” the president commented. “Today our scientists are mapping the human brain to unlock the answers to Alzheimer’s. They’re developing drugs to regenerate damaged organs, devising new materials to make batteries 10 times more powerful. Now is not the time to gut these job-creating investments in science and innovation.”

George M. Church, Ph.D., professor of genetics at Harvard Medical School and director of PersonalGenomes.org, said he was helping to plan the Brain Activity Map project.

“If you look at the total spending in neuroscience and nanoscience that might be relative to this today, we are already spending more than that. We probably won’t spend less money, but we will probably get a lot more bang for the buck,” he commented in the New York Times.

BAM
The proposal for the project came from six scientists, among them Dr. Church, who said in the journal Neuron, “We propose launching a large-scale, international public effort, the Brain Activity Map project (BAM), aimed at reconstructing the full record of neural activity across complete neural circuits. This technological challenge could prove to be an invaluable step toward understanding fundamental and pathological brain processes.”

The collective idea for the initiative was generated at a meeting of neuroscientists and nanoscientists convened in September 2011 at the Kavli Royal Society International, U.K., organized by Tom Kalil, deputy director for policy at the White House’s Office of Science and Technology Policy (OSTP), and Miyoung Chun, Ph.D., vice president of science programs at the Kavli Foundation in Oxnard, California.

The Kavli institute has founded institutes for brain science at UC San Diego, Yale, and the Norwegian University of Science and Technology.

Meeting attendees articulated in their report the issues the BAM will address, mentioning “our persistent ignorance of the brain’s micro-circuitry—the minute and multitudinous connections contained within,” and citing the great brain scientist Ramon y Cajal’s 1923 quote that refers to the interconnected, intermixed, and dynamical network of different cell types as “impenetrable jungles where many investigators have lost themselves.” “Another equally fundamental shortcoming,” they noted, “is our inability to monitor network interactions and coordinated brain activities densely, and to do so simultaneously across extended regions of the brain, and with sufficient temporal and spatial resolution.”

And most scientists, whether proponents or opponents of the big science approach to brain mapping, agree that its biggest challenge is the need to develop novel tools to study the brain.

Revolutionary New Tools Needed
Partha Mitra, Ph.D., a theoretical physicist and currently Crick-Clay professor of biomathematics at Cold Spring Harbor Laboratory, says that current methods to visualize living or dead brains provide only glimpses of small portions of the full spatial extent of neurons in the human brain, or pictures of thin sections of brain, with pieces of the neurons in them. “No one has yet seen, under the microscope or in digital reconstruction, a complete human brain neuron that sends projections to distant parts of the brain. To do that at the whole-brain scale would be like seeing a new continent or planet.” Dr. Mitra’s research currently combines experimental, theoretical, and informatics approaches to gain an understanding of how brains work.

Dr. Chun has been developing the project since the beginning and has described herself as the “glue” holding the diverse stakeholders together. She told Nature that “there’s clearly an issue with tool development—and not just amending current, existing tools, although that will be important in the initial stages. In the long run, one of the very important points would be to come up with revolutionary new tools that will measure brain activity in a completely different way than what we know now.”

And project proponents say the only way to tackle some thus-far intractable human diseases, like Alzheimer’s and Parkinson’s disease, is with a huge program. “We are right on the edge of finding out really vital information about the brain,” says Brown University neuroscientist John Donoghue, Ph.D., who was part of the project team. “There are questions we can now answer that can only be tackled as a collaborative project,” not by individual labs.

In Dr. Donoghue’s view, the problem is that the people developing novel technologies and the neuroscience community don’t communicate effectively. Biologists don’t know enough about the tools already out there, and the materials scientists aren’t getting feedback from them on ways to make their tools more useful.

Economic Incentives
And there’s no denying the economic incentives the project provides. “What motivates people to pursue these big projects is not the belief that they will solve problems,” says Michael Eisen, Ph.D., a biologist at the University of California, Berkeley. “It’s the belief that this is the way to get money.”

John Mazziotta, M.D., Ph.D., UCLA’s department of neurology chair and director of its Brain Mapping Center, says, “This initiative is more comprehensive than anything I’ve ever seen in medicine and neuroscience. This effort will be both the stimulus and the challenge to work and collaborate in ways we haven’t done before, but always have wanted to.”

UCLA will likely benefit handsomely from the initiative as it says it is “well-positioned” to play a significant role in the effort and to capture funding that will support such an initiative, owing to the existence of the Ahmanson-Lovelace Brain Mapping Center and its “excellence” in nanoscience and nanotechnology.

Dr. Church is also in favor of spreading the funding for the project around. In an interview with Harvard Medical School News last month, he said, “The Genome Project didn’t adequately embrace small science. I think enabling small labs to do amazing things might be more powerful than having a juggernaut of a large lab, or worse yet, a race among a few large labs.”

A report from the Battelle Technology Partnership says that, between 1988 and 2010, federal investment in genomic research generated an economic impact of $796 billion, an “impressive” figure considering that Human Genome Project (HGP) spending between 1990 and 2003 amounted to $3.8 billion, for a reported ROI of 141:1.

Apart from job creation and ROI, if this massive initiative provides new treatment targets for intractable human neurological and psychiatric disorders, it will have been worth the investment.

Patricia Fitzpatrick Dimond, Ph.D. (pdimond@genengnews.com), is technical editor at Genetic Engineering & Biotechnology News.

Eciton burchellii, the swarm raider

ORIGINAL: Myrmecos.net
Feb 7th, 2011
by myrmecos


Eciton burchellii is, according to Wikipedia, “the archetypal species of army ant.” Insofar as this is the most-studied species, and the ant that dominates the nature documentaries, I suppose the moniker is true.

Yet, the biology of E. burchellii is not terribly representative of army ants. It is an outlier, an ant whose behavior has diverged in significant ways from its relatives, even from its congeners.

media and minor workers stream towards the front 
The primary difference is gustatory. Most army ants boast a fine-tuned palate, favoring the brood of particular ant genera. E. burchellii is vulgar by comparison. It’ll eat just about any sort of animal protein. Spiders? Katydids? Lizards? Termites? Ants? They’re all good.

a submajor worker carries a spider 
media workers pull a termite from a rotting log 
Corresponding to the broad diet, Eciton burchellii’s foraging behavior is a radical departure from the army ant norm. For being an “archetypal” army ant, its raids ironically lack the military precision of other species.
An overhead depiction of an Eciton burchellii swarm raid (from Rettenmeyer 1963) 
Instead of tight, focused columns that concentrate the ants’ efforts at a single point at the raid front, E. burchellii’s raids are messy, diffuse affairs. Foragers spread out in a swarm, casting a vicious stinging, biting net for animals too slow to escape. The strategy isn’t great for capturing concentrated food sources like another ant colony, but it works well for a generalist’s lunch. 

Raids set out in the morning with a swarm front near the overnight bivouac. As the day progresses, the swarm moves through the forest for a distance of about a football field. The back end of the swarm organizes into a series of converging trails to funnel captured prey back to the bivouac, so that by mid-day the swarm from above looks rather like the figure at left. Last night’s video featured one such trail. By nightfall the ants are back in the bivouac, where, depending on the development of their larvae, they either regroup to march to a new bivouac site by morning, or they stay put and launch a new raid in a different direction the following day.


Eciton burchellii raids are fascinating to watch, but they aren’t indicative of how most army ants behave. The fame of this one species is partly an artifact of the degree to which its unique biology intersects with the cognitive quirks of our own species. 

These ants are large, they forage above ground, and their sprawling raids cover extensive areas. And they raid primarily during the day, when we humans tend to walk about in the forest. That they catch our attention may be due as much to our biology as to theirs. 


[photos 1,2,4 taken at Jatun Sacha; 3,5 at Maquipucuna]

Extracted Oil And Gas Wastewater Causing Earthquakes?

March 29, 2013


A 2011 magnitude-5.7 quake in Oklahoma, linked to wastewater injection, buckled US Highway 62. (Credit: John Leeman)
After pulling massive amounts of fossil fuels out of the Earth’s crust so we can burn them up into our atmosphere, we have a good sense of where the stuff goes. Our oceans. A global greenhouse. Our lungs. But what happens to the ground formerly occupied by those fossil fuels?

It’s becoming increasingly clear that oil and gas extraction processes are actually weakening the structural integrity of the Earth’s crust just enough to cause more frequent earthquakes, in places not used to them.

Oklahoma, for instance, is not known for earthquakes. Yet the central U.S. has seen an elevenfold jump in earthquakes in recent years, including the Sooner State’s largest earthquake on record. This 5.7-magnitude quake occurred on November 6, 2011 near Prague, Oklahoma. And research published yesterday in Geology from the University of Oklahoma, Columbia University, and the U.S. Geological Survey has made a direct connection to the disposal of wastewater from conventional oil production:

A new study in the journal Geology is the latest to tie a string of unusual earthquakes, in this case, in central Oklahoma, to the injection of wastewater deep underground. Researchers now say that the magnitude 5.7 earthquake near Prague, Okla., on Nov. 6, 2011, may also be the largest ever linked to wastewater injection. Felt as far away as Milwaukee, more than 800 miles away, the quake — the biggest ever recorded in Oklahoma — destroyed 14 homes, buckled a federal highway and left two people injured. Small earthquakes continue to be recorded in the area.

The recent boom in U.S. energy production has produced massive amounts of wastewater. The water is used both in hydrofracking, which cracks open rocks to release natural gas, and in coaxing petroleum out of conventional oil wells. In both cases, the brine and chemical-laced water has to be disposed of, often by injecting it back underground elsewhere, where it has the potential to trigger earthquakes. The water linked to the Prague quakes was a byproduct of oil extraction at one set of oil wells, and was pumped into another set of depleted oil wells targeted for waste storage.

As Climate Progress has written before, this practice of disposing chemical-laced water generated during the extraction of oil and gas has far-reaching effects. Drillers have been doing this for more than a decade, and the researchers note that the Oklahoma quake did not actually require very much wastewater. In fact, because we have been doing this for so long, the built-up pressure in the Earth’s crust changes the criteria of how quakes happen. The study’s abstract notes:

Significantly, this case indicates that decades-long lags between the commencement of fluid injection and the onset of induced earthquakes are possible, and modifies our common criteria for fluid-induced events.

So we could be paying with earthquakes for more than a decade of wastewater injection and fracking for quite some time. There’s not much more room 9,000 feet down. Wellhead records indicate that pressure in these areas underground increased by a factor of ten from 2001 to 2006.

Fracking usually receives more attention for seismic activity than wastewater injection. Ohio banned fracking “to stop the ground from shaking.” But it’s the whole process of drilling (oil and gas), fracking, and then disposal that contributes to the problem.

A tanker truck prepares to leave an Ohio water plant that removes metals and chemicals from fracking wastewater. (Photo: Scott Galvin)
Can we stop doing this? Recycling the wastewater is cheaper, and more and more gas companies have started contracting out to do just that. But as Ohio Department of Natural Resources officials note, it’s hard to track where this water goes because it is not regulated. This is rather important because the water is laced with toxic metals, dangerous chemicals, and radium. Recycling companies say the waste ends up in landfills.

So the two options are to either inject it back down in the ground where it lubricates fault lines enough to cause earthquakes in Oklahoma and Ohio, or hope that radium doesn’t leak out of landfills.

Renewable fuels sound better and better the more we learn about enhanced drilling for unconventional oil and gas.

Authored by:
Joe Romm is a Fellow at American Progress and is the editor of Climate Progress, which New York Times columnist Tom Friedman called “the indispensable blog” and Time magazine named one of the 25 “Best Blogs of 2010.” In 2009, Rolling Stone put Romm at #88 on its list of 100 “people who are reinventing America.” Time named him a “Hero of the Environment” and “The Web’s most influential ...