Sunday, August 31, 2014

Practopoiesis: How cybernetics of biology can help AI


In creating any form of AI, we must copy from biology. The argument goes as follows. A brain is a biological product, and so, then, are its products: perception, insight, inference, logic, mathematics, and so on. By creating AI we inevitably tap into something that biology has already invented on its own. It thus follows that the more we want an AI system to resemble a human—e.g., to earn a better grade on the Turing test—the more we need to copy biology.

When it comes to describing living systems, we traditionally adopt different explanatory principles for different levels of system organization.
  1. One set of principles is used for “low-level” biology, such as the evolution of our genome through natural selection, and a completely different set for describing the expression of those genes.
  2. A different type of story again is used to explain what our neural networks do.
  3. Needless to say, the descriptions at the very top of the organizational hierarchy, at the level of our behavior, are made with concepts that again live in a world of their own.
But what if it were possible to unify all these different aspects of biology and describe them all by a single set of principles? What if we could use the same fundamental rules to talk about the physiology of a kidney and the process of a conscious thought? What if we had concepts that could give us insight into the mental operations underlying logical inference on one hand and the relation between phenotype and genotype on the other? This request is not so outrageous. After all, all of these phenomena are biological.

One can argue that such an all-embracing theory of the living would also be beneficial for the further development of AI. The theory could guide us on what is possible and what is not. Given a certain technological approach, what are its limitations? Maybe it could answer the question of what the unitary components of intelligence are, and whether my software has enough of them.

For more inspiration, let us look to the Shannon–Wiener theory of information and appreciate how helpful it is for dealing with various types of communication channels (including memory storage, which is also a communication channel, only across time rather than space). We can calculate how much channel capacity is needed to transmit (or store) certain contents. We can also easily compare two communication channels and determine which has more capacity, which allows us to directly compare devices that are otherwise incomparable. For example, an interplanetary communication system based on satellites can be compared to the DNA located within the nucleus of a human cell. Only thanks to information theory can we calculate whether a given satellite connection has enough capacity to transfer the DNA information describing a human being to a hypothetical recipient on another planet. (The answer is: yes, easily.) Information theory is thus invaluable for making these kinds of engineering decisions.
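The back-of-envelope behind that "yes, easily" can be made concrete. The genome size and link rate below are my own illustrative assumptions, not figures from the text:

```python
# Back-of-envelope Shannon-style comparison. Both constants are
# illustrative assumptions, not figures stated in the article.
GENOME_BASES = 3.2e9      # approximate human genome length, in base pairs
BITS_PER_BASE = 2         # four equiprobable symbols -> log2(4) = 2 bits
LINK_RATE_BPS = 2e6       # assumed interplanetary relay rate, ~2 Mbit/s

genome_bits = GENOME_BASES * BITS_PER_BASE
transfer_minutes = genome_bits / LINK_RATE_BPS / 60

print(f"Genome: about {genome_bits / 8 / 1e6:.0f} MB of raw sequence")
print(f"Transfer at 2 Mbit/s: about {transfer_minutes:.0f} minutes")
```

Even at a modest megabit-class data rate, the raw sequence fits through in under an hour, which is the sense in which the comparison is "easy" once both systems are expressed in bits.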

So, how about intelligence? Wouldn’t it be good to possess a similarly general theory of adaptive, intelligent behavior? Maybe certain quantities other than bits could tell us why the intelligence of plants lags behind that of primates. We might also better understand which essential ingredients distinguish human intelligence from that of a chimpanzee. Using the same theory we could compare
  • an abacus, 
  • a hand-held calculator, 
  • a supercomputer, and 
  • a human intellect.
The good news is that such an overarching biological theory now exists, and it is called practopoiesis. Derived from the Ancient Greek praxis + poiesis, practopoiesis means creation of actions. The name reflects the theory’s fundamental premise about the common property that can be found across all the different levels of organization of biological systems:
  • Gene expression mechanisms act; 
  • bacteria act; 
  • organs act; 
  • organisms as a whole act.
Due to this focus on biological action, practopoiesis has a strong cybernetic flavor, as it must deal with the need of acting systems to close feedback loops. Input is needed to trigger actions and to determine whether more actions are needed. For that reason, the theory is founded on the basic theorems of cybernetics, namely the law of requisite variety and the good regulator theorem.

The key novelty of practopoiesis is that it introduces mechanisms explaining how different levels of organization mutually interact. These mechanisms help explain how genes create the anatomy of the nervous system, or how anatomy creates behavior.

When practopoiesis is applied to the human mind and to AI algorithms, the results are quite revealing.

To understand them, we need to introduce the concept of the practopoietic traverse. Without going into detail about what a traverse is, let us just say that it is a quantity with which one can compare the different capabilities of systems to adapt. The traverse is a kind of practopoietic equivalent to the bit of information in Shannon–Wiener theory. Just as we can compare two communication channels by the number of bits of information transferred, we can compare two adaptive systems by the number of traverses. Thus, a traverse is not a measure of how much knowledge a system has (for that, the good old bit does the job just fine). It is rather a measure of how much capability the system has to adjust its existing knowledge, for example when new circumstances emerge in the surrounding world.

To the best of my knowledge, no artificial intelligence algorithm in use today has more than two traverses. That means these algorithms interact with the surrounding world at a maximum of two levels of organization. For example, an AI algorithm may receive satellite images at one level of organization and, at another level of organization, the categories into which it should learn to classify those images. We would say that this algorithm has two traverses of cybernetic knowledge. In contrast, biological behaving systems (that is, animals, including Homo sapiens) operate with three traverses.
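As an illustration, here is a minimal sketch of those two levels of interaction in code. Mapping "two traverses" onto a supervised learner in this way is my own interpretation, not a definition taken from practopoietic theory: the trained model closes one feedback loop by acting on inputs, and the learning rule closes a second loop by adjusting the model from labelled feedback.

```python
# Illustrative sketch only: the "two traverses" reading of a supervised
# learner below is an interpretation, not part of the theory itself.

def predict(weights, x):
    # Level 1: the system acts on an input from the world.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    # Level 2: a second feedback loop (labels) adapts the actor itself.
    weights = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = y - predict(weights, x)
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
    return weights

# Toy data: points labelled by whether x0 + x1 is positive (last entry is a bias term).
data = [(-1, -1, 1), (2, 1, 1), (1, 2, 1), (-2, -1, 1)]
labels = [0, 1, 1, 0]
w = train(data, labels)
print([predict(w, x) for x in data])  # matches the labels after training
```

What such a system cannot do, on this reading, is adapt its own learning rule; that would require feedback at a third level of organization.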

This makes a world of difference in adaptive intelligence. Two-traversal systems can be super-fast and omni-knowledgeable, and their tech specs may list peta-everything (as they sometimes already do), but these systems nevertheless remain comparatively dull next to three-traversal systems, such as a three-year-old girl or even a domestic cat.

To appreciate the difference between two and three traverses, let us go one step lower and consider systems with only one traverse. An example would be a PC without any advanced AI algorithm installed.

This computer is already orders of magnitude faster than I am at calculation, far better at memory storage, and beats me at spell-checking without its processor even getting warm. And yet, paradoxically, I am still the smarter one around. Computational capacity and adaptive intelligence are thus not the same.

Importantly, this same relationship of “me vs. the computer” holds for “me vs. a modern advanced AI algorithm”. I am still the more intelligent one, although the computer may have more computational power. The relationship “AI algorithm vs. non-AI computer” also holds: even a small AI algorithm, implemented, say, on a single PC, is in many ways more intelligent than a petaflop supercomputer without AI. Thus, there is a certain hierarchy of adaptive intelligence that is determined not by memory size or the number of floating-point operations executed per second, but by the ability to learn and adapt to the environment.

A key requirement for adaptive intelligence is the capacity to observe how well one is doing towards a certain goal combined with the capacity to make changes and adjust in light of the feedback obtained. Practopoiesis tells us that there is not only one step possible from non-adaptive to adaptive, but that multiple adaptive steps are possible. Multiple traverses indicate a potential for adapting the ways in which we adapt.

We can go even one step further down the adaptive hierarchy and consider the least adaptive of systems, e.g., a book. Provided that the book is large enough, it can contain all the knowledge about the world, and yet it is not adaptive: it cannot, for example, rewrite itself when something changes in that world. Typical computer software can do much more and administer many changes, but much is still left that cannot be adjusted without a programmer. A modern AI system is smarter still and can reorganize its knowledge to a much higher degree. Nevertheless, these systems remain incapable of certain types of adjustments that a human person, or an animal, can make. Practopoiesis tells us that these systems fall into different adaptive categories, which are independent of the systems’ raw information-processing capabilities. Rather, these adaptive categories are defined by the number of levels of organization at which the system receives feedback from the environment — also referred to as traverses.

We can thus make the following hierarchical list of the best exemplars in each adaptive category:
  • A book: dumbest; zero traverses
  • A computer: somewhat smarter; one traverse
  • An AI system: much smarter; two traverses
  • A human: rules them all; three traverses
Most importantly for the creation of strong AI, practopoiesis tells us in which direction technological development should be heading:
Engineering creativity should be geared towards empowering machines with one more traverse. To match a human, a strong AI system has to have three traverses.

Practopoietic theory also explains what is so special about the third traverse. Systems with three traverses (referred to as T3-systems) are capable of storing their past experiences in an abstract, general form, which can be used much more efficiently than in two-traversal systems. This general knowledge can be applied to the interpretation of specific novel situations, so that quick and well-informed inferences are made about what is currently going on and what actions should be executed next. This process, unique to T3-systems, is referred to as anapoiesis, and can be described generally as the capability to reconstruct cybernetic knowledge that the system once had and to use this knowledge efficiently in a given novel situation.

If biology has invented T3-systems and anapoiesis and made good use of them, there is no reason why we should not be able to do the same in machines.


Danko Nikolić is a brain and mind scientist, running an electrophysiology lab at the Max Planck Institute for Brain Research, and is the creator of the concept of ideasthesia. More about practopoiesis can be read here.


ORIGINAL: Singularity Web

5 Robots Booking It to a Classroom Near You


Robots are the new kids in school.

The technological creations are taking on serious roles in the classroom. With the accelerating rate of robotic technology, school administrators all over the world are plotting how to implement them in education, from elementary through high school.

In South Korea, robots are replacing English teachers entirely, entrusted with leading and teaching entire classrooms. In Alaska, some robots are replacing the need for teachers to physically be present at all.


Robotics 101 is now in session. Here are five ways robots are being introduced into schools.

1. Nao Robot as math teacher


At PS 76, a school in Harlem, a Nao robot created in France and nicknamed Projo helps students improve their math skills. It's small, about the size of a stuffed animal, and sits by a computer to assist students working on math and science problems online.

Sandra Okita, a teacher at the school, told The Wall Street Journal that the robot gauges how students interact with non-human teachers. The students have taken to the humanoid robotic peer, which can speak and react; they say it's helpful and gives just the right amount of hints to help them get their work done.

2. Aiding children with autism


The Nao Robot also helps improve social interaction and communication for children with autism. The robots were introduced in a classroom in Birmingham, England in 2012, to play with children in elementary school. Though the children were intimidated at first, they've taken to the robotic friend, according to The Telegraph.

3. VGo robot for ill children


Sick students will never have to miss class again if the VGo robot catches on. Created by VGo Communications, the rolling robot has a webcam and can be controlled and operated remotely via computer. About 30 students with special needs nationwide have been using the robot to attend classes.

For example, a 12-year-old Texas student with leukemia kept up with classmates by using a VGo robot. With a price tag of about $6,000, the robots aren't easily accessible, but they're a promising sign of what's to come.

4. Robots over teachers


In the South Korean town of Masan, robots are starting to replace teachers entirely. The government started using the robots to teach students English in 2010. The robots operate under supervision, but the plan is to have them lead a room exclusively in a few years, as robot technology develops.

5. Virtual teachers


South Korea isn't the only place getting virtual teachers. A school in Kodiak, Alaska has started using telepresence robots to beam teachers into the classroom. The tall, rolling robots have iPads attached to the top, which teachers will use to video chat with students.

The Kodiak Island Borough School District's superintendent, Stewart McDonald, told The Washington Times he was inspired to do this by the show The Big Bang Theory, which features a similar telepresence robot. Each robot costs about $2,000; the school bought 12 in early 2014.



Thursday, August 28, 2014

DARPA Project Starts Building Human Memory Prosthetics

The first memory-enhancing devices could be implanted within four years

Photo: Lawrence Livermore National Laboratory. Remember this? Lawrence Livermore engineer Vanessa Tolosa holds up a silicon wafer containing micromachined implantable neural devices for use in experimental memory prostheses.

“They’re trying to do 20 years of research in 4 years,” says Michael Kahana in a tone that’s a mixture of excitement and disbelief. Kahana, director of the Computational Memory Lab at the University of Pennsylvania, is mulling over the tall order from the U.S. Defense Advanced Research Projects Agency (DARPA). Over the next four years, he and other researchers are charged with understanding the neuroscience of memory and then building a prosthetic memory device that’s ready for implantation in a human brain.

DARPA’s first contracts under its Restoring Active Memory (RAM) program challenge two research groups to construct implants for veterans with traumatic brain injuries that have impaired their memories. Over 270,000 U.S. military service members have suffered such injuries since 2000, according to DARPA, and there are no truly effective drug treatments. This program builds on an earlier DARPA initiative focused on building a memory prosthesis, under which a different group of researchers had dramatic success in improving recall in mice and monkeys.

Kahana’s team will start by searching for biological markers of memory formation and retrieval. For this early research, the test subjects will be hospitalized epilepsy patients who have already had electrodes implanted to allow doctors to study their seizures. Kahana will record the electrical activity in these patients’ brains while they take memory tests.

“The memory is like a search engine,” Kahana says. “In the initial memory encoding, each event has to be tagged. Then in retrieval, you need to be able to search effectively using those tags.” He hopes to find the electrical signals associated with these two operations.

Once they’ve found the signals, researchers will try amplifying them using sophisticated neural stimulation devices. Here Kahana is working with the medical device maker Medtronic, in Minneapolis, which has already developed one experimental implant that can both record neural activity and stimulate the brain. Researchers have long wanted such a “closed-loop” device, as it can use real-time signals from the brain to define the stimulation parameters.

Kahana notes that designing such closed-loop systems poses a major engineering challenge. Recording natural neural activity is difficult when stimulation introduces new electrical signals, so the device must have special circuitry that allows it to quickly switch between the two functions. What’s more, the recorded information must be interpreted with blistering speed so it can be translated into a stimulation command. “We need to take analyses that used to occupy a personal computer for several hours and boil them down to a 10-millisecond algorithm,” he says.
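The real-time constraint Kahana describes can be sketched as a loop with a hard time budget. This is a hypothetical illustration of the closed-loop idea, not Medtronic's actual device logic; the trivial threshold "analysis" merely stands in for the boiled-down biomarker detector:

```python
# Hypothetical closed-loop sketch (not Medtronic's firmware): each
# record -> analyze -> stimulate cycle must fit a hard real-time budget.
import time

BUDGET_S = 0.010  # the roughly 10 ms window Kahana describes

def analyze(samples):
    # Stand-in for the memory-biomarker detector: a trivial
    # threshold on mean recorded amplitude.
    return sum(samples) / len(samples) > 0.5

def closed_loop_cycle(samples):
    start = time.perf_counter()
    stimulate = analyze(samples)          # decide whether to stimulate
    elapsed = time.perf_counter() - start
    return stimulate, elapsed <= BUDGET_S  # did we meet the deadline?

decision, on_time = closed_loop_cycle([0.2, 0.9, 0.7, 0.6])
print(decision, on_time)
```

The engineering problem is that the real analysis is vastly heavier than a threshold, yet must still return inside the same budget, and on implant-grade hardware rather than a PC.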

In four years’ time, Kahana hopes his team can show that such systems reliably improve memory in patients who are already undergoing brain surgery for epilepsy or Parkinson’s. That, he says, will lay the groundwork for future experiments in which medical researchers can try out the hardware in people with traumatic brain injuries—people who would not normally receive invasive neurosurgery.

The second research team is led by Itzhak Fried, director of the Cognitive Neurophysiology Laboratory at the University of California, Los Angeles. Fried’s team will focus on a part of the brain called the entorhinal cortex, which is the gateway to the hippocampus, the primary brain region associated with memory formation and storage. “Our approach to the RAM program is homing in on this circuit, which is really the golden circuit of memory,” Fried says. In a 2012 experiment, he showed that stimulating the entorhinal regions of patients while they were learning memory tasks improved their performance.

Fried’s group is working with Lawrence Livermore National Laboratory, in California, to develop more closed-loop hardware. At Livermore’s Center for Bioengineering, researchers are leveraging semiconductor manufacturing techniques to make tiny implantable systems. They first print microelectrodes on a polymer that sits atop a silicon wafer, then peel the polymer off and mold it into flexible cylinders about 1 millimeter in diameter. The memory prosthesis will have two of these cylindrical arrays, each studded with up to 64 hair-thin electrodes, which will be capable of both recording the activity of individual neurons and stimulating them. Fried believes his team’s device will be ready for tryout in patients with traumatic brain injuries within the four-year span of the RAM program.

Outside observers say the program’s goals are remarkably ambitious. Yet Steven Hyman, director of psychiatric research at the Broad Institute of MIT and Harvard, applauds its reach. “The kind of hardware that DARPA is interested in developing would be an extraordinary advance for the whole field,” he says. Hyman says DARPA’s funding for device development fills a gap in existing research. Pharmaceutical companies have found few new approaches to treating psychiatric and neurodegenerative disorders in recent years, he notes, and have therefore scaled back drug discovery efforts. “I think that approaches that involve devices and neuromodulation have greater near-term promise,” he says.

This article originally appeared in print as “Making a Human Memory Chip.”

ORIGINAL: IEEE Spectrum
By Eliza Strickland
Posted 27 Aug 2014

Everybody Relax: An MIT Economist Explains Why Robots Won't Steal Our Jobs

Living together in harmony. Photo by Oli Scarff/Getty Images

If you’ve ever found yourself fretting about the possibility that software and robotics are on the verge of thieving away all our jobs, renowned MIT labor economist David Autor is out with a new paper that might ease your nerves. Presented Friday at the Federal Reserve Bank of Kansas City’s big annual conference in Jackson Hole, Wyoming, the paper argues that humanity still has two big points in its favor: people have “common sense,” and they’re “flexible.”

Neil Irwin already has a lovely writeup of the paper at the New York Times, but let’s run down the basics. There’s no question machines are getting smarter, and quickly acquiring the ability to perform work that once seemed uniquely human. Think self-driving cars that might one day threaten cabbies, or computer programs that can handle the basics of legal research.

But artificial intelligence is still just that: artificial. We haven’t untangled all the mysteries of human judgment, and programmers definitely can’t translate the way we think entirely into code. Instead, scientists at the forefront of AI have found workarounds like machine-learning algorithms. As Autor points out, a computer might not have any abstract concept of a chair, but show it enough Ikea catalogs, and it can eventually suss out the physical properties statistically associated with a seat. Fortunately for you and me, this approach still has its limits.

For example, both a toilet and a traffic cone look somewhat like a chair, but a bit of reasoning about their shapes vis-à-vis the human anatomy suggests that a traffic cone is unlikely to make a comfortable seat. Drawing this inference, however, requires reasoning about what an object is “for” not simply what it looks like. Contemporary object recognition programs do not, for the most part, take this reasoning-based approach to identifying objects, likely because the task of developing and generalizing the approach to a large set of objects would be extremely challenging.

That’s what Autor means when he says machines lack common sense. They don’t think. They just do math.
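Autor's chair example can be caricatured in a few lines. This toy nearest-neighbor matcher is my own construction, not from the paper, and the objects and features are invented. It labels things purely by statistical similarity, with no concept of what anything is "for", so a seat-height traffic cone lands with the chairs:

```python
# Toy caricature of statistical object recognition (invented data):
# similarity in feature space, with no reasoning about function.
import math

# features: (height_cm, has_flat_top, tapers_upward)
training = {
    "office chair": (45, 1, 0),
    "dining chair": (48, 1, 0),
    "coffee table": (40, 1, 0),
    "floor lamp":  (150, 0, 1),
}
labels = {"office chair": "seat", "dining chair": "seat",
          "coffee table": "not-seat", "floor lamp": "not-seat"}

def nearest_label(features):
    # Pick the label of the statistically most similar known object.
    best = min(training, key=lambda k: math.dist(training[k], features))
    return labels[best]

# A traffic cone: seat-height, no flat top, tapering upward.
print(nearest_label((50, 0, 1)))  # statistically chair-like, functionally absurd
```

A person instantly rejects the cone by reasoning about shape versus anatomy; the matcher has no machinery for that inference, which is exactly the gap Autor is pointing at.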

And that leaves lots of room for human workers in the future.

Technology has already whittled away at middle-class jobs, from factory workers replaced by robotic arms to secretaries made redundant by Outlook, over the past few decades. But Autor argues that plenty of today's middle-skill occupations, such as the construction trades and medical technicians, will stick around, because “many of the tasks currently bundled into these jobs cannot readily be unbundled … without a substantial drop in quality.”

These aren’t jobs that require performing a single task over and over again, but instead demand that employees handle some technical work while dealing with other human beings and improvising their way through unexpected problems. Machine learning algorithms can’t handle all of that. Human beings, Swiss-army knives that we are, can. We’re flexible.

Just like the dystopian arguments that machines are about to replace a vast swath of the workforce, Autor’s paper is very much speculative. It’s worth highlighting, though, because it cuts through the silly sense of inevitability that sometimes clouds this subject. Predictions about the future of technology and the economy are made to be dashed. And while Noah Smith makes a good point that we might want to be prepared for mass, technology-driven unemployment even if there’s just a slim chance of it happening, there’s also no reason to take it for granted.

Jordan Weissmann is Slate's senior business and economics correspondent.

ORIGINAL: Slate

It's Time to Take Artificial Intelligence Seriously

No Longer an Academic Curiosity, It Now Has Measurable Impact on Our Lives

A still from "2001: A Space Odyssey" with Keir Dullea reflected in the lens of HAL's "eye." MGM / POLARIS / STANLEY KUBRICK

The age of intelligent machines has arrived, only they don't look at all like we expected. Forget what you've seen in movies; this is no HAL from "2001: A Space Odyssey," and it's certainly not Scarlett Johansson's disembodied voice in "Her." It's more akin to what insects, or even fungi, do when they "think." (What, you didn't know that slime molds can solve mazes?)

Artificial intelligence has lately been transformed from an academic curiosity to something that has measurable impact on our lives. Google Inc. used it to increase the accuracy of voice recognition in Android by 25%. The Associated Press is printing business stories written by it. Facebook Inc. is toying with it as a way to improve the relevance of the posts it shows you.

What is especially interesting about this point in the history of AI is that it's no longer just for technology companies. Startups are beginning to adapt it to problems where, at least to me, its applicability is genuinely surprising.

Take advertising copywriting. Could the "Mad Men" of Don Draper's day have predicted that by the beginning of the next century, they would be replaced by machines? Yet a company called Persado aims to do just that.

Persado does one thing, and judging by its client list, which includes Citigroup Inc. and Motorola Mobility, it does it well. It writes advertising emails and "landing pages" (where you end up if you click on a link in one of those emails, or an ad).

Here's an example: Persado's engine is being used across all of the types of emails a top U.S. wireless carrier sends out when it wants to convince its customers to renew their contracts, upgrade to a better plan or otherwise spend money.

Traditionally, an advertising copywriter would pen these emails; perhaps the company would test a few variants on a subset of its customers, to see which is best.

But Persado's software deconstructs advertisements into five components, including emotion words, characteristics of the product, the "call to action" and even the position of text and the images accompanying it. By recombining them in millions of ways and then distilling their essential characteristics into eight or more test emails that are sent to some customers, Persado says it can effectively determine the best possible come-on.
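The recombine-then-distill idea can be sketched in miniature. The component names and message fragments below are invented for illustration; Persado's real engine works over millions of combinations and a proprietary ontology of language:

```python
# Miniature sketch of the combinatorial approach described in the text.
# Components and copy are invented; the real system is far larger.
import itertools
import random

components = {
    "emotion": ["Don't miss out:", "Great news:", "Exclusive:"],
    "feature": ["unlimited data", "a faster plan", "a loyalty discount"],
    "call":    ["Renew today", "Upgrade now", "Claim your offer"],
}

# Every recombination of the components...
variants = list(itertools.product(*components.values()))
print(len(variants), "candidate messages")

# ...distilled into a small batch of test emails sent to real customers.
random.seed(0)  # deterministic sampling for this illustration
test_batch = random.sample(variants, 8)
for emotion, feature, call in test_batch[:2]:
    print(f"{emotion} Get {feature}. {call}.")
```

Measuring click-through on the small test batch then tells the engine which component values carry the effect, letting it rank the full combinatorial space without sending every variant.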

"A creative person is good but random," says Lawrence Whittle, head of sales at Persado. "We've taken the randomness out by building an ontology of language."

The results speak for themselves: In the case of emails intended to convince mobile subscribers to renew their plans, initial trials with Persado increased click-through rates by 195%, the company says.

Here's another example of AI becoming genuinely useful: X.ai is a startup aimed, like Persado, at doing one thing exceptionally well. In this case, it's scheduling meetings. X.ai's virtual assistant, Amy, isn't a website or an app; she's simply a "person" whom you cc: on emails to anyone with whom you'd like to schedule a meeting. Her sole "interface" is emails she sends and receives—just like a real assistant. Thus, you don't have to bother with back-and-forth emails trying to find a convenient time and available place for lunch. Amy can correspond fluidly with anyone, but only on the subject of his or her calendar. This sounds like a simple problem to crack, but it isn't, because Amy must communicate with a human being who might not even know she's an AI, and she must do it flawlessly, says X.ai founder Dennis Mortensen.

E-mail conversations with Amy are already quite smooth. Mr. Mortensen used her to schedule our meeting, naturally, and it worked even though I purposely threw in some ambiguous language about the times I was available. But that is in part because Amy is still in the "training" stage, where anything she doesn't understand gets handed to humans employed by X.ai.

It sounds like cheating, but every artificially intelligent system needs a body of data on which to "train" initially. For Persado, that body of data was text messages sent to prepaid cellphone customers in Europe, urging them to re-up their minutes or opt into special plans. For Amy, it's a race to gather a body of 100,000 email meeting requests. Amusingly, engineers at X.ai thought about using one of the biggest public databases of emails available, the Enron emails, but there is too much scheming in them to make a good sample.

Both of these systems, and others like them, work precisely because their makers have decided to tackle problems that are as narrowly defined as possible. Amy doesn't have to have a conversation about the weather—just when and where you'd like to schedule a meeting. And Persado's system isn't going to come up with the next "Just Do It" campaign.

This is where some might object that the commercialized vision for AI isn't intelligent at all. But academics can't even agree on where the cutoff for "intelligence" is in living things, so the fact that these first steps toward economically useful artificial intelligence lie somewhere near the bottom of the spectrum of things that think shouldn't bother us.

We're also at a time when it seems that advances in the sheer power of computers will lead to AI that becomes progressively smarter. So-called deep-learning algorithms allow machines to learn unsupervised, whereas both Persado and X.ai's systems require training guided by humans.

Last year Google showed that its own deep-learning systems could learn to recognize a cat from millions of images scraped from the Internet, without ever being told what a cat was in the first place. It's a parlor trick, but it isn't hard to see where this is going: the enhancement of knowledge workers' effectiveness. Mr. Mortensen estimates there are already 87 million such workers in the world, and that they schedule 10 billion meetings a year. As more tools tackling specific portions of their jobs become available, their days could be filled with the things only humans can do, like creative work.

"I think the next Siri is not Siri; it's 100 companies like ours mashed into one," says Mr. Mortensen.

—Follow Christopher Mims on Twitter @Mims or write to him at christopher.mims@wsj.com.

By CHRISTOPHER MIMS
Aug. 24, 2014

ECOVOLT: The World’s First Bioelectrically Enhanced Wastewater Treatment System



EcoVolt is a breakthrough wastewater treatment system that leverages electrically active microbes to create clean water and high quality renewable methane gas from wastewater. 

EcoVolt helps industrial beverage producers, particularly breweries and wineries, as well as food processing plants, generate energy from their wastewater streams, decreasing their carbon footprint and turning environmental liabilities into sources of revenue.

EcoVolt is ideal for wineries, breweries and other food and beverage producers that are:
  • Developing greenfield sites
  • Expanding production
  • Seeking greater energy and water efficiency
Developed and scaled over the past five years with funding from the National Science Foundation, EcoVolt is an anaerobic treatment process enhanced by newly discovered electrically active microbes. To learn more about EcoVolt technology, click here.

Cambrian Technology
EcoVolt Technology
Cambrian’s flagship product, EcoVolt Bioelectric Wastewater Treatment, leverages a particular kind of bioelectricity in a process called “electromethanogenesis”. During this process, electrically active organisms convert carbon dioxide and electricity into methane fuel. Biologically coated electrodes in the reactor rapidly convert organic pollutants into electricity and subsequently convert electricity into methane fuel.


The methane produced by EcoVolt is both high quality (near pipeline quality) and renewable, and can be used in a combined heat and power system to provide sustainable energy to the facility.

The process of electromethanogenesis was discovered in 2008 and subsequently commercialized by Cambrian Innovation Inc. It has a wide range of applications, including wastewater treatment and nutrient management.

Bioelectrochemical Technology
Bioelectrochemical systems (BES), also known as microbial electrochemical technologies or microbial fuel cells, are a new technology based on the ability of certain microbes (termed exoelectrogens) to generate electricity via direct contact with electrodes. Traditional fuel cells and electrochemical systems use chemical catalysts that oxidize fuel (such as hydrogen) at anodes, and reduce oxygen at cathodes. A circuit between the anode and the cathode captures electrical energy released in the process.

BES technology can be thought of as fuel cells with a regenerative, living microbial catalyst. These microbes are capable of oxidizing and reducing a broad range of organic fuels including negative cost fuel such as wastewater. The technology works because the exoelectrogenic bacteria can respire through direct contact with the electrodes in our systems. BES have a range of advantages over current technologies depending on the exact domain of application.


The EcoVolt treatment system includes a “headworks” unit for wastewater conditioning and expandable EcoVolt modules for wastewater treatment and gas generation. A combined heat and power system can be included in the package to convert biogas into clean heat and electricity. The modular system is pre-fabricated for low-cost installation.
Why Select EcoVolt?

Traditional treatment systems, like aerated ponds in the wine industry, consume energy and land, costing hundreds of thousands of dollars per year depending on the site. EcoVolt reverses this balance, tapping the natural energy already present in the wastewater, converting a power draw into a source of energy.

Clean Energy Generation

EcoVolt generates clean electricity and clean heat directly from industrial wastewater streams. High quality, renewable biogas created within the reactor is captured and used as a fuel in a combined heat and power system. A typical installation can create 30 – 200 kW of power.
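The 30–200 kW figure can be sanity-checked with standard anaerobic-digestion rules of thumb. The constants and the example brewery below are generic assumptions for illustration, not Cambrian's numbers:

```python
# Rough sizing estimate: electrical power from COD removed in wastewater.
# Rules of thumb from anaerobic-digestion literature, NOT Cambrian's numbers:
#   ~0.35 Nm3 of CH4 per kg of COD removed (theoretical maximum)
#   ~35.8 MJ per Nm3 of CH4 (lower heating value)
#   ~35% electrical efficiency for a small CHP engine

CH4_PER_KG_COD = 0.35   # Nm3 per kg COD removed
CH4_LHV_MJ = 35.8       # MJ per Nm3
CHP_ELEC_EFF = 0.35     # fraction of biogas energy recovered as electricity

def electric_kw(flow_m3_per_day, cod_removed_kg_per_m3):
    """Continuous electrical output (kW) from treating a wastewater stream."""
    cod_kg = flow_m3_per_day * cod_removed_kg_per_m3      # kg COD per day
    energy_mj = cod_kg * CH4_PER_KG_COD * CH4_LHV_MJ      # MJ/day in biogas
    return energy_mj * CHP_ELEC_EFF * 1000.0 / 86400.0    # MJ/day -> kW

# A hypothetical mid-sized brewery: 500 m3/day with 3 kg COD/m3 removed
print(round(electric_kw(500, 3.0), 1), "kW")
```

For this hypothetical stream the estimate lands around 76 kW, comfortably inside the 30–200 kW range quoted above.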

Robust Wastewater Treatment

EcoVolt’s proprietary bioelectric process is robust and adaptable to a range of wastewater streams, and therefore particularly suited to varying BOD loads that are typically found in the food and beverage industry.

Prefabricated, Turn-key Installation

EcoVolt installations feature a prefabricated and modular design, reducing non-recurring engineering costs and greatly reducing install time and cost. The headworks can be designed to accommodate a high number of modular EcoVolt tanks, creating a low capex option to expand production at any point in the future.

Automated, Remote Operation

Because EcoVolt is the first commercial treatment process built on bioelectrochemistry, its systems can automatically monitor the health of their microbial populations, enabling automated and remote control of the treatment process and greatly reducing operator workload.

Sustainable Water Management

Water is an increasingly precious resource, and industries globally are moving towards the reuse of process water. The Cambrian EcoVolt system can form the basis for varying degrees of water reuse, whether for irrigation, tank washing or production.


ORIGINAL: Cambrian Innovation

Wednesday, August 27, 2014

Ray Kurzweil: Get ready for hybrid thinking



Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.

ORIGINAL: TED
Jun 2, 2014

Tuesday, August 26, 2014

Biomimicry Chair Could Change Furniture as We Know It


A common complaint about 3D printing is that it is not capable of producing things in a wide range of materials. Industrial printers can currently print items from wood, metal, plastics, and… that’s basically it. If you aren’t the owner of a successful manufacturing firm and you’re just an average consumer or hobbyist looking to try your hand at 3D printing, then the only available options for you come in the form of cheap, hard plastics like ABS and PLA. This would be great if you wanted to make figurines, jewelry, or handy devices, parts and gadgets. But if you wanted to print something bigger or softer, you simply couldn’t. Until now.

Lilian van Daal, a graduate student at The Royal Academy of Art, The Hague, has created a 3D-printed chair out of a single recyclable material, with a structure influenced by plant materials. The chair was made as an alternative to traditional furniture, which is usually upholstered and glued together from many different materials, making it difficult to recycle.
Source: Dezeen

“A lot of materials are used in normal furniture production, including several types of foam, and it’s very difficult to recycle because everything is glued together,” Van Daal told Dezeen.

Van Daal, a design student, decided to experiment with various materials and design methods to see if it would be possible to create environmentally-friendly furniture. By using 3D printing technology and mimicking plant cell structures, van Daal was able to produce the ‘Biomimicry’ chair.

Unlike most 3D printed objects, the biomimicry chair is not entirely hard. Van Daal mimicked plant cells by distributing the material differently at different parts of the chair. She decreased the density of the material in certain places and increased it in others. This enabled some sections of the chair to be soft and some to be hard – yet the entire chair was made from just one material.
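The idea of one material with position-dependent density can be sketched as a simple interpolation over “density zones.” The zone positions and density values below are invented for illustration; they are not van Daal's actual design data:

```python
# Toy sketch of variable-density infill: one material, different stiffness zones.
# The zone layout and density numbers are invented for illustration only.

def infill_density(x_norm, zones):
    """Linearly interpolate infill density (0..1) along one axis of the part.

    zones: sorted list of (position, density) control points, position in [0, 1].
    """
    for (x0, d0), (x1, d1) in zip(zones, zones[1:]):
        if x0 <= x_norm <= x1:
            t = (x_norm - x0) / (x1 - x0)
            return d0 + t * (d1 - d0)
    raise ValueError("x_norm outside the defined zones")

# Stiff load-bearing frame (90% infill) grading into a soft seat surface (20%).
chair_zones = [(0.0, 0.9), (0.6, 0.9), (0.8, 0.2), (1.0, 0.2)]

print(infill_density(0.3, chair_zones))  # deep inside the stiff frame
print(infill_density(0.9, chair_zones))  # soft seating surface
```

A slicer driven by such a function would deposit more or less material at each point, producing hard and soft regions from a single filament, which is essentially the effect the chair demonstrates.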

Source: Dezeen

“I was testing the flexibility and the stiffness you can get from one material by 3D printing various structures,” said Van Daal. “I did lots of experiments with different structures to identify the kind of properties each structure has. When you adjust the structure a little bit you immediately get a different function. In the strong parts I used as little material as possible but enough to still have the good stiffness.”

According to Dezeen, Van Daal is currently in talks with furniture companies to discuss developing the project further. Not only is her concept much more environmentally-friendly than traditional furniture, but it also saves costs associated with purchasing multiple materials and having to transport them all separately to a factory. The ability to 3D print a soft, recyclable chair is a big step both for furniture and also for 3D printing technology, as the stigma of being able to print only rock hard objects is slowly diminishing.

ORIGINAL: Inside 3DP
Aug 15, 2014

Monday, August 25, 2014

Why a deep-learning genius left Google & joined Chinese tech shop Baidu (interview)

Image Credit: Jordan Novet/VentureBeat

SUNNYVALE, California — Chinese tech company Baidu has yet to make its popular search engine and other web services available in English. But consider yourself warned: Baidu could someday wind up becoming a favorite among consumers.

The strength of Baidu lies not in youth-friendly marketing or an enterprise-focused sales team. It lives instead in Baidu’s data centers, where servers run complex algorithms on huge volumes of data and gradually make its applications smarter, including not just Web search but also Baidu’s tools for music, news, pictures, video, and speech recognition.

Despite lacking the visibility (in the U.S., at least) of Google and Microsoft, Baidu has done a lot of work on deep learning, one of the most promising areas of artificial intelligence (AI) research in recent years. This work involves training systems called artificial neural networks on lots of information derived from audio, images, and other inputs, and then presenting the systems with new information and receiving inferences about it in response.
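That train-then-infer loop can be shown in miniature with the simplest possible “network,” a single logistic neuron fit by gradient descent. The toy dataset, learning rate, and iteration count below are illustrative values only; real deep-learning systems stack many layers of such units:

```python
# Minimal "train, then infer" loop: a single-neuron network (logistic
# regression) fit by stochastic gradient descent on a toy dataset.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: two features per example, label 0 or 1.
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = [0, 0, 1, 1]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.5         # learning rate

for _ in range(2000):                      # training: repeated gradient steps
    for (x1, x2), target in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - target                   # gradient of log loss w.r.t. logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def infer(x1, x2):
    """Inference: score a new, unseen input with the trained weights."""
    return sigmoid(w[0] * x1 + w[1] * x2 + b)

print(infer(0.95, 0.9))   # near 1: recognized as the "1" class
print(infer(0.05, 0.1))   # near 0
```

The same pattern, fit parameters on known examples, then query the model with new inputs, is what Baidu's systems do at data-center scale with far larger networks and data.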

Two months ago, Baidu hired Andrew Ng away from Google, where he started and led the so-called Google Brain project. Ng, whose move to Baidu follows Hugo Barra’s jump from Google to Chinese company Xiaomi last year, is one of the world’s handful of deep-learning rock stars.

Ng has taught classes on machine learning, robotics, and other topics at Stanford University. He also co-founded massive open online course startup Coursera.

He makes a strong argument for why a person like him would leave Google and join a company with a lower public profile. His argument can leave you feeling like you really ought to keep an eye on Baidu in the next few years.

“I thought the best place to advance the AI mission is at Baidu,” Ng said in an interview with VentureBeat.

Baidu’s search engine only runs in a few countries, including China, Brazil, Egypt, and Thailand. The Brazil service was announced just last week. Google’s search engine is far more popular than Baidu’s around the globe, although Baidu has already beaten out Yahoo and Microsoft’s Bing in global popularity, according to comScore figures.

And Baidu co-founder and chief executive Robin Li, a frequent speaker on Stanford’s campus, has said he wants Baidu to become a brand name in more than half of all the world’s countries. Presumably, then, Baidu will one day become something Americans can use.

Above: Baidu co-founder and chief executive Robin Li.
Image Credit: Baidu
Now that Ng leads Baidu’s research arm as the company’s chief scientist out of the company’s U.S. R&D Center here, it’s not hard to imagine that Baidu’s tools in English, if and when they become available, will be quite brainy — perhaps even eclipsing similar services from Apple and other tech giants. (Just think of how many people are less than happy with Siri.)

A stable full of AI talent
But this isn’t a story about the difference a single person will make. Baidu has a history in deep learning.

A couple of years ago, Baidu hired Kai Yu, an engineer skilled in artificial intelligence. Based in Beijing, he has kept busy.

“I think Kai ships deep learning to an incredible number of products across Baidu,” Ng said. Yu also developed a system for providing infrastructure that enables deep learning for different kinds of applications.

“That way, Kai personally didn’t have to work on every single application,” Ng said.

In a sense, then, Ng joined a company that had already built momentum in deep learning. He wasn’t starting from scratch.
Above: Baidu’s Kai Yu.
Image Credit: Kai Yu

Only a few companies could have appealed to Ng, given his desire to push artificial intelligence forward. It’s capital-intensive, as it requires lots of data and computation. Baidu, he said, can provide those things.

Baidu is nimble, too. Unlike Silicon Valley’s tech giants, which measure activity in terms of monthly active users, Chinese Internet companies prefer to track usage by the day, Ng said.

“It’s a symptom of cadence,” he said. “What are you doing today?” And product cycles in China are short; iteration happens very fast, Ng said.

Plus, Baidu is willing to get infrastructure ready to use on the spot.

“Frankly, Kai just made decisions, and it just happened without a lot of committee meetings,” Ng said. “The ability of individuals in the company to make decisions like that and move infrastructure quickly is something I really appreciate about this company.”

That might sound like a kind deference to Ng’s new employer, but he was alluding to a clear advantage Baidu has over Google.

“He ordered 1,000 GPUs [graphics processing units] and got them within 24 hours,” Adam Gibson, co-founder of deep-learning startup Skymind, told VentureBeat. “At Google, it would have taken him weeks or months to get that.”

Not that Baidu is buying this type of hardware for the first time. Baidu was the first company to build a GPU cluster for deep learning, Ng said — a few other companies, like Netflix, have found GPUs useful for deep learning — and Baidu also maintains a fleet of servers packing ARM-based chips.
Above: Baidu headquarters in Beijing.
Image Credit: Baidu

Now the Silicon Valley researchers are using the GPU cluster and also looking to add to it and thereby create still bigger artificial neural networks.

But the efforts have long since begun to weigh on Baidu’s books and impact products. “We deepened our investment in advanced technologies like deep learning, which is already yielding near term enhancements in user experience and customer ROI and is expected to drive transformational change over the longer term,” Li said in a statement on the company’s earnings for the second quarter of 2014.

Next step: Improving accuracy
What will Ng do at Baidu? The answer will not be limited to any one of the company’s services. Baidu’s neural networks can work behind the scenes for a wide variety of applications, including those that handle text, spoken words, images, and videos. Surely core functions of Baidu like Web search and advertising will benefit, too.

“All of these are domains Baidu is looking at using deep learning, actually,” Ng said.

Ng’s focus now might best be summed up by one word: accuracy.

That makes sense from a corporate perspective. Google has the brain trust on image analysis, and Microsoft has the brain trust on speech, said Naveen Rao, co-founder and chief executive of deep-learning startup Nervana. Accuracy could potentially be the area where Ng and his colleagues will make the most substantive progress at Baidu, Rao said.

Matthew Zeiler, founder and chief executive of another deep-learning startup, Clarifai, was more certain. “I think you’re going to see a huge boost in accuracy,” said Zeiler, who has worked with Geoff Hinton and Yann LeCun and spent two summers on the Google Brain project.

One thing is for sure: Accuracy is on Ng’s mind.
Above: The lobby at Baidu’s office in Sunnyvale, Calif.
Image Credit: Jordan Novet/VentureBeat

“Here’s the thing. Sometimes changes in accuracy of a system will cause changes in the way you interact with the device,” Ng said. For instance, more accurate speech recognition could translate into people relying on it much more frequently. Think “Her”-level reliance, where you just talk to your computer as a matter of course rather than using speech recognition in special cases.

“Speech recognition today doesn’t really work in noisy environments,” Ng said. But that could change if Baidu’s neural networks become more accurate under Ng.

Ng picked up his smartphone, opened the Baidu Translate app, and told it that he needed a taxi. A female voice said that in Mandarin and displayed Chinese characters on screen. But it wasn’t a difficult test, in some ways: This was no crowded street in Beijing. This was a quiet conference room in a quiet office.

“There’s still work to do,” Ng said.

‘The future heroes of deep learning’
Meanwhile, researchers at companies and universities have been hard at work on deep learning for decades.

Google has built up a hefty reputation for applying deep learning to images from YouTube videos, data center energy use, and other areas, partly thanks to Ng’s contributions. And recently Microsoft made headlines for deep-learning advancements with its Project Adam work, although Li Deng of Microsoft Research has been working with neural networks for more than 20 years.

In academia, deep learning research groups operate all over North America and Europe. Key figures in the past few years include Yoshua Bengio at the University of Montreal, Geoff Hinton of the University of Toronto (Google grabbed him last year through its DNNresearch acquisition), Yann LeCun from New York University (Facebook pulled him aboard late last year), and Ng.

But Ng’s strong points differ from those of his contemporaries. Whereas Bengio made strides in training neural networks, LeCun developed convolutional neural networks, and Hinton popularized restricted Boltzmann machines, Ng takes the best, implements it, and makes improvements.

“Andrew is neutral in that he’s just going to use what works,” Gibson said. “He’s very practical, and he’s neutral about the stamp on it.”

Not that Ng intends to go it alone. To create larger and more accurate neural networks, Ng needs to look around and find like-minded engineers.

“He’s going to be able to bring a lot of talent over,” Dave Sullivan, co-founder and chief executive of deep-learning startup Ersatz Labs, told VentureBeat. “This guy is not sitting down and writing mountains of code every day.”

And truth be told, Ng has had no trouble building his team.

“Hiring for Baidu has been easier than I’d expected,” he said.

“A lot of engineers have always wanted to work on AI. … My job is providing the team with the best possible environment for them to do AI, for them to be the future heroes of deep learning.”


ORIGINAL: VentureBeat
July 30, 2014 8:03 AM