Thursday, September 29, 2016

This Company Wants To Stop Our Algae Epidemic By Sucking It Up And Turning It Into Plastic

The combination of rising temperatures and industrial chemicals in our water is creating a lot of algae. Bloom thinks it has a solution to clean up our lakes—and make products in the process.

Looking at the thick green layer of slime on some Florida beaches, most people see only the environmental crisis: out-of-control algae, fed by human activity and climate change, are killing fish, manatees, and other underwater life. Rob Falken sees a solvable problem and your next pair of shoes.


Bloom, Falken's startup, uses custom technology to carefully harvest wild algae from the water, and then transforms it into a raw material to make plastics and foams for use in clothing, sneakers, car upholstery, and other products. The first product—a foam traction pad for surfboards, made with Kelly Slater—will hit shelves on October 1.

All of those products are typically made from petroleum formed into tiny pellets. Bloom makes pellets from algae instead, solar-drying the algae into flakes, pulverizing the flakes into a powder, and then turning that powder into pellets that can be melted down to merge with the petroleum-based ingredients. By partially replacing the pellets made from fossil fuels—and by sucking up carbon as it grows—the algae also helps lower carbon footprints.

"The end goal is to remove as much of the petroleum feedstock as possible," says Falken. "When you take a waste stream from nature—there naturally but there in such mass because of man made inputs—we can take that feedstock, that problem, and functionalize it into usable goods that are the exact same quality, indistinguishable, from the status quo that's out there today."

The company's small mobile harvesting units sit at the edge of a pond or lake, or float on a pontoon in the ocean, and collect algae from the top six inches or so of the water column.
"The harvester works like a giant vacuum, basically," Falken says. The design, with screens and gentle suction, can't harm wildlife in the water; the technology was used first at catfish farms, where sucking up a fish with the algae would be an obvious problem.

Along with the algae, the harvester also removes nitrogen and phosphorus—excess fertilizers that end up in the water from farming, sewage overflows, or lawns, and help cause the algae growth in the first place. Pure, filtered water is returned to the body of water.

In Florida, where officials declared a state of emergency in several counties because of algae blooms this summer, much of the problem comes from giant Lake Okeechobee, where runoff from sugar cane plantations and cattle farms fills the lake with algae-boosting fertilizer. Infrastructure and development in the area have made the problem worse. After heavy rains, the state flushes the algae-filled water out through canals, and it ends up also harming beaches at the coasts.

As the algae proliferate in the water, they can deplete its oxygen. When the growth is out of control and the algae die, they release toxins called microcystins that can linger for weeks or months.


Bloom hopes to control the problem in Florida—and many other places struggling with algae—by regularly harvesting algae before it reaches a toxic state, and clearing out the fertilizers that cause future overgrowth. While the company can clean out toxic algae, only healthy algae is usable, so it's better to catch the problem before it escalates.

"When you have an algae crisis today, that's because of a lot of negligence, that's because nobody's doing anything to remove the inputs of nitrogen and phosphorus, and there's a massive influx of those inputs running rampant," says Falken. "You also couple that with an extremely high heat index and you've got a perfect storm."

The company has been operating for two years in China at Lake Taihu—an even larger lake than Okeechobee, and one that millions of people rely on for drinking water—where it says it has removed millions of pounds of algae.

Now, they're in meetings with Florida officials, along with the infrastructure company AECOM, to make a plan for demonstrating the technology in the state. They hope to begin regularly working at Lake Okeechobee.

While other companies grow algae in tanks, wild harvesting has advantages—the process solves an environmental problem, and it avoids the energy and cost of growing algae in the dark. Algae grown in tanks is also often genetically engineered, and Bloom argues that such strains could wipe out natural ones if they escaped into the wild.

The algae-based feedstock can be dropped into current manufacturing without any changes, and the cost is the same as petroleum-based feedstocks on the market today. "We can't convert industries worldwide if the price is higher," Falken says.
There's also no shortage of wild algae, especially as warming waters make the problem worse. "We've already got more algae than we'll ever need," he says. "In China, Lake Taihu could produce enough algae for us to produce a pair of shoes for every man, woman, and child on this planet."

Even as some governments try to address the larger problems—Florida, for example, plans to spend more than $1 billion buying land to create storage ponds in the hope of naturally treating water—the process isn't guaranteed to work, and will take time.

"Those inputs are not going away," says Falken. "Agriculture's not going away, sugar cane plantations aren't going away, people are not going to en masse stop using fertilizers on their lawn. It's unfortunate. We can educate as much as possible, but the reality here is someone has got to be proactive. Because we can do something with it, and do a lot of good with it, we can ensure that algal bloom crisis is in time a thing of the past."

They hope to use the algae harvesters all over the country and world. "You look at Florida and say that's the epicenter, that's where the crisis is worst because all of the water policy issues and all the negligence," he says. "But if you look at the U.S. as a whole, all 50 states have algal bloom in some semi-crisis mode right now. You've got about 20 states that are really at peak crisis. The algae is everywhere, and the problems are global."

ORIGINAL: Fast Company
09.29.16

New Memristor Circuit Mimics Synapses in the Brain


Illustration: University of Massachusetts, Amherst
To a human brain, picking one particular image out of a thousand is an easy task. Billions of neurons and the synapses that connect them can quickly process information in parallel to make a decision. Seeking to make such processing that easy for machines, scientists and engineers have been working with devices called memristors, which share some behaviors with neural synapses. Engineers at the University of Massachusetts report this week that they’ve invented a memristor circuit that matches a synapse’s behavior more closely than any before.

First predicted in 1971 and invented in 2008 by HP, memristors are so named because they remember how much voltage you applied across the device and how long you applied it, storing the information as a change in resistance. HP engineers noticed immediately that such a characteristic was similar to the way the synaptic connection between neurons strengthens with use to form a memory.
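To make that "memory" concrete, here is a toy Python simulation of the linear ion-drift model from HP's 2008 work. It is an illustration of the general device behavior, not the UMass Amherst team's diffusive memristor, and the mobility constant is exaggerated so the state change is visible in a short run:

    # Toy simulation of the linear ion-drift memristor model (Strukov et al., 2008).
    # Resistance depends on how much charge has flowed through the device, i.e. on
    # how much voltage was applied and for how long.
    import numpy as np

    R_ON, R_OFF = 100.0, 16e3   # resistance when fully doped / undoped (ohms)
    D = 10e-9                   # thickness of the resistive film (m)
    MU_V = 1e-10                # dopant mobility; exaggerated here for visibility

    def simulate(voltages, dt=1e-6, x0=0.1):
        """Integrate the internal state x = w/D (doped fraction) over a drive."""
        x, currents = x0, []
        for v in voltages:
            r = x * R_ON + (1.0 - x) * R_OFF      # instantaneous resistance
            i = v / r
            x += MU_V * R_ON / D**2 * i * dt      # linear ion drift moves the front
            x = min(max(x, 0.0), 1.0)             # the front stays inside the film
            currents.append(i)
        return np.array(currents)

    # A sinusoidal drive traces the memristor's signature pinched hysteresis loop
    # (plot i against v to see it): the device "remembers" the applied history.
    t = np.arange(0.0, 2e-3, 1e-6)
    v = np.sin(2 * np.pi * 1e3 * t)
    i = simulate(v)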

Memristors are “kind of the ideal candidate in many aspects,” says Jianhua Yang, an electrical engineer at the University of Massachusetts Amherst who is working on improving their behavior. Low-power, simple circuits of memristors could improve the ability of computers to solve power-intensive computer vision and machine learning tasks that human brains handle with little effort.

In research published in Nature Materials on 26 September, Yang and his team mimicked a crucial underlying component of how synaptic connections get stronger or weaker: the flow of calcium.

The movement of calcium into or out of the neuron, neuroscientists have found, directly affects the strength of the connection. Chemical processes move the calcium in and out, triggering a long-term change in the synapse’s strength. Research published in 2015 in ACS Nano Letters and Advanced Functional Materials showed that certain types of memristors can simulate some of the calcium behavior, but not all of it.

In the new research, Yang combined two types of memristors in series to create an artificial synapse. The hybrid device more closely mimics biological synapse behavior—the calcium flow in particular, Yang says.

The new memristor, called a diffusive memristor because atoms in the resistive material move even without an applied voltage when the device is in its high-resistance state, was a dielectric film sandwiched between Pt or Au electrodes. The film contained Ag nanoparticles, which played the role of calcium in the experiments.

By tracking the movement of the silver nanoparticles inside the diffusive memristor, the researchers noticed a striking similarity to how calcium functions in biological systems.

A voltage pulse to the hybrid device drove silver into the gap between the diffusive memristor’s two electrodes, creating a filament bridge. After the pulse died away, the filament began to break, the silver moved back, and the resistance increased.

As with calcium, one force drives the silver in and another drives it out.

To complete the artificial synapse, the researchers connected the diffusive memristor in series to another type of memristor that had been studied before.

When presented with a sequence of voltage pulses with particular timing, the artificial synapse showed the kind of long-term strengthening behavior a real synapse would, according to the researchers. “We think it is sort of a real emulation, rather than simulation because they have the physical similarity,” Yang says.

He says the next step is to better understand the mechanisms. Then, his team plans to combine the artificial synapses into arrays and eventually build bio-inspired circuits.

“We can have a more direct, more natural, and a more complete emulation of the synaptic system using the silver-based system,” he says.

ORIGINAL: IEEE Spectrum
By Andrew Silver
Posted 29 Sep 2016

Hydroelectric Power Isn’t as Green as We Thought

Dam it. (Actually, on second thought, maybe don’t.)

Building a dam to generate electricity from water sounds like a renewable energy no-brainer. But the resulting reservoirs may have a more detrimental effect on our climate than we realized.

According to research from Washington State University that’s due to be published in the journal BioScience next week, the reservoirs formed by dams emit more methane per unit area than expected. As Science reports, measuring methane’s release from these kinds of bodies of water has been more difficult than for other gases, like carbon dioxide, because instead of diffusing out of the water it emerges in bubbles.

New techniques to measure methane bubbles, though, have allowed the Washington State University team to calculate the rate of release more accurately. And the results show that reservoirs typically emit 25 percent more methane than previously thought. That may not sound too bad, but it’s worth remembering that methane is around 30 times more potent as a greenhouse gas than carbon dioxide, so even small quantities can have a great impact.
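As a back-of-the-envelope illustration in code (only the 25 percent revision and the roughly 30x potency factor come from the article; the baseline emission figure below is hypothetical):

    # Back-of-the-envelope CO2-equivalent arithmetic. Only the 25% upward
    # revision and the ~30x potency factor come from the article; the baseline
    # emission figure here is hypothetical.
    GWP_CH4 = 30.0                         # methane vs. CO2 warming, per unit mass
    old_methane_t = 1000.0                 # hypothetical reservoir emissions (tonnes/yr)
    new_methane_t = old_methane_t * 1.25   # 25 percent more than previously thought

    extra_co2e = (new_methane_t - old_methane_t) * GWP_CH4
    print(f"Extra warming equivalent: {extra_co2e:.0f} tonnes CO2e per year")
    # -> the 250-tonne revision counts like an extra 7500 tonnes of CO2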

The Hoover Dam in Nevada.
Meanwhile, the world is building new hydroelectric installations apace. According to a paper published in the journal Aquatic Sciences last year, as many as 3,700 hydropower dams will come online in the next 10 to 20 years. That is expected to provide over 700 gigawatts of extra capacity around the world—about 70 percent of the total installed capacity across the whole of the U.S.

Clearly, hydroelectric plants are by no means as polluting as fossil-fuel energy production. But their rapid construction over the coming years will have a larger impact on our emissions than we hoped. Dam it.

September 29, 2016

Wednesday, September 28, 2016

The Patent for an Organic, Non-Chemical Pesticide That Could Change the World…


If there’s anything you read – or share – let this be it. The content of this article has the potential to radically shift the world in a variety of positive ways. And since Monsanto would love for this article not to go viral, all we can ask is that you share, share, share the information being presented so that it can reach as many people as possible.

In 2006, a patent was granted to a man named Paul Stamets. Though Paul is the world’s leading mycologist, his patent has received very little attention and exposure. Why is that? According to executives in the pesticide industry, this patent represents “the most disruptive technology we have ever witnessed.” And when the executives say disruptive, they are referring to it being disruptive to the chemical pesticides industry.

What has Paul discovered? 
The mycologist has figured out how to use mother nature’s own creations to keep insects from destroying crops. It’s what is being called a SMART pesticide. These pesticides provide a safe and nearly permanent solution for controlling over 200,000 species of insects – and all thanks to the ‘magic’ of mushrooms.

Paul does this by taking entomopathogenic fungi (fungi that destroy insects) and morphing them so that they do not produce spores. This actually attracts the insects, which then eat the fungi and are turned into fungi from the inside out!


This patent has the potential to revolutionize the way humans grow crops – if it can reach mass exposure.

To tolerate the use of chemical pesticides in modern agriculture is to deny the evidence of their detrimental effects on the environment. Such ignorance really can no longer be tolerated. For example, can you imagine a world without bees?

Monsanto’s chemical concoctions, sprayed all over farmers’ fields around the world, have been blamed for the large-scale bee die-off. While a growing number of countries are banning Monsanto’s products, they are still being used in nations that should be aware of their dangers. To say that new methods need to be implemented before it is too late is an understatement.


Monsanto presently generates $16 billion per year (as reported in 2014), so you can be certain they do not want anything interrupting that flow of revenue. Such income gives them nearly limitless resources and the ability to suppress information that may damage their reputation.


But by becoming educated on the benefits of growing sustainable, organic, and bio-dynamic food, sharing articles like this, and boycotting GMO & herbicide-sprayed crops, the corporate demon may soon get the message.

Here are helpful links to understand more about the incredible patent discussed above:

Here is a link to the patent we are speaking of…
http://www.google.com/patents/US7122176

A list of all the patents Paul has applied for:
http://patents.justia.com/inventor/paul-edward-stamets

Plenty of information about Paul Stamets:
http://www.fungi.com/about-paul-stamets.html

Wikipedia page about Paul Stamets:
http://en.m.wikipedia.org/wiki/Paul_Stamets

And finally, here is a TED Talk given by Paul in 2008:
6 Ways Mushrooms Can Save The World…
ORIGINAL: EWAO

If you like this idea, be sure to share it with your friends and inspire someone you know. Anything becomes possible with just a little inspiration…

Tuesday, September 27, 2016

Yes, you really can make your own EpiPen for $30

Greg Friese/Flickr
Thank you, biohackers.

In the latest example of corporate greed in the pharmaceutical world, the US state of West Virginia announced today that it's investigating the makers of the EpiPen for Medicaid fraud - which means they think it's defrauded the US government healthcare system.

More specifically, it's accusing manufacturer Mylan of inflating the price of EpiPens by almost 500 percent since it purchased the life-saving device back in 2007.

Since then, the cost of a single EpiPen has gone from around US$57 to $318 - a 461 percent increase. Which is pretty frustrating when you consider that many people with allergies need to keep the medication on them at all times in case of going into life-threatening anaphylaxis. Anaphylaxis can be triggered by anything from a bee sting to eating trace amounts of peanut.

In the face of the public backlash over their price rises, at the end of last month, Mylan announced they'd be releasing a generic version of the EpiPen that would cost only $150 per injection.

But industry insiders were quick to criticise this apparent act of goodwill, with pharmaceutical experts telling NBC News earlier this month that they estimated an EpiPen would only cost around $30 to make.

Now a bio-hacking collective called Four Thieves Vinegar has tested that claim out for themselves, and shown you really can engineer your own DIY EpiPen - which they called the "EpiPencil" - for around $35. And they claim it works as well as the $300 version - although we definitely don't recommend you try it at home.

The main difference between their version and the one you can buy at the pharmacy is that you have to measure out the correct dose of epinephrine before using the DIY version.

"We've gotten many requests to do something about the EpiPen, so we have," says Michael Laufer, one of the founders of Four Thieves Vinegar, who has a PhD in mathematics from the City University of New York.

"We developed the EpiPencil, which is an epinephrine auto-injector built entirely from off-the-shelf parts, which can be assembled in a matter of minutes for just over $30."

EpiPens are designed as 'last resort' devices that are filled with epinephrine, an adrenaline drug that's more than 100 years old. The drug itself isn't patented, but what makes the EpiPen so attractive is the fact that its design lets pretty much anyone use it - which is handy in emergency situations.

So why haven't many other companies stepped up as competition and made an EpiPen equivalent to rival Mylan's? As Jamie Condliffe explains for MIT Technology Review, a big issue is the patent problem.

Mylan holds the patent on the auto-injecting device until 2025, and while it would be possible to build another type of device that does the same thing, that makes things a lot trickier.

"[There's] fear of creating a device that doesn’t work reliably, and a regulatory process that makes getting products to market incredibly difficult," writes Condliffe.

Four Thieves Vinegar has now published a video and fully downloadable instructions on how to make your own DIY EpiPencil at home. 

To be clear, we're definitely not recommending you go out and make your own EpiPen. The Four Thieves Vinegar version is not only totally unregulated, but it also hasn't been shown to reliably work for everyone - something that would require years of clinical trials and peer-reviewed papers.

"It's essential to remember that epinephrine auto-injectors are life-saving products, and it is critical that they are made to a high standard of quality so patients can rely on them to work safely and effectively," said US Food and Drug Administration spokesperson, Theresa Eisenman. 

But as an experiment to show that the EpiPen really can be created for around $30 - and with non-bulk parts at that - the Four Thieves Vinegar DIY version definitely makes its point. And hopefully it reminds people that they shouldn't have to pay ridiculous amounts for life-saving medicine.

"You know there are people who are just not buying an EpiPen because they can’t afford it," Laufer told The Parallax. "That’s unconscionable."


With West Virginia's new investigation and the public still pretty pissed off about the cost of EpiPens, it'll be interesting to see what happens next. Your move, Mylan.


Four Thieves Vinegar Biohacking Collective's Mantra: "Free Medicine for Everyone"



People are disenfranchised from access to medicine for various reasons. To circumvent these, we have developed a way for individuals to manufacture their own medications. We have designed an open-source automated lab reactor, which can be built with off-the-shelf parts, and can be set to synthesize different medications. This will save hundreds of thousands of lives.

The main reasons people are disenfranchised from medicines are price, legality, and lack of infrastructure. Medicines like Sovaldi, which costs $80,000 for a course of treatment, are beyond the reach of most people. Mifepristone and Misoprostol are unavailable in many places where abortion is illegal. Antiretroviral HIV treatments, even when provided free, have no way of getting to remote locations in third-world countries.

The design will be published online, along with synthesis programs. The system will also include a forum for users to communicate and contribute to its development. With time, the project will become self-sustaining, much like other open-source movements.

Original: Science Alert
FIONA MACDONALD
21 SEP 2016
Original: Four Thieves Vinegar.org

Soft Robot With Microfluidic Logic Circuit



Perhaps our future overlords won’t be made up of electrical circuits after all but will instead be soft-bodied like ourselves. However, their design will have its origins in electrical analogues, as with the Octobot.

The Octobot is the brainchild of a team of Harvard University researchers who recently published an article about it in Nature. Its body is modeled on the octopus and is composed entirely of soft parts, made using a combination of 3D printing, molding, and soft lithography. Two sets of arms on either side of the Octobot take turns moving under the control of a soft oscillator circuit. You can see it in action in the video below.
Octobot mechanical and electrical analogue circuits (credit: Michael Wehner et al./Nature)

As shown in the diagram, the fuel is liquid hydrogen peroxide (H2O2), which the oscillator draws from one of two fuel reservoirs and feeds into one of two reaction chambers. In the oscillator, pinch valves act like JFETs. When fuel from one reservoir is flowing into one reaction chamber, one of the pinch valves pinches off the flow of fuel to the other reaction chamber. It's not clear exactly how, but that fuel flow is eventually pinched off by another pinch valve as fuel begins to flow from the other reservoir to the other reaction chamber.

The reaction chamber contains a small amount of platinum, a catalyst that decomposes the hydrogen peroxide, releasing a much larger volume of oxygen gas into actuators in the arms. Those actuators expand like balloons, causing the arms to move. The reaction chambers are the analogues of amplifiers. Other analogues include check valves for diodes, vent orifices for resistors, and other chambers that appear to act as capacitors.
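The alternating behavior is easy to caricature in code. Below is a toy two-state relaxation oscillator in Python; every constant is invented for illustration, and the real device is continuous microfluidics rather than discrete time steps:

    # Toy caricature of the Octobot's fluidic oscillator as a two-state
    # relaxation oscillator. All constants are made up for illustration.
    def octobot_oscillator(steps, flip_at=100.0, gen_rate=7.0, vent_rate=2.0):
        pressure = [0.0, 0.0]   # O2 pressure behind each set of arms (arbitrary units)
        active = 0              # which reaction chamber is currently fed with fuel
        history = []
        for _ in range(steps):
            pressure[active] += gen_rate                # H2O2 -> O2 on the Pt catalyst
            idle = 1 - active
            pressure[idle] = max(0.0, pressure[idle] - vent_rate)  # vent orifice bleeds off
            if pressure[active] >= flip_at:             # pinch valve flips the flow
                active = idle
            history.append(tuple(pressure))
        return history

    for left, right in octobot_oscillator(60)[::6]:
        print(f"left arms: {left:6.1f}   right arms: {right:6.1f}")

Run it and the pressure peaks alternate between the two sets of arms, which is all the electrical-analogue oscillator needs to do.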

This is a proof of concept and as yet the Octobot doesn’t walk but the team hopes to make one that can crawl, swim and interact with its environment. When it does we look forward to it joining this other soft-bodied bot modeled after a stingray. It looks like our overlords might all come from the sea.


Here you can see the Octobot in action.



And here’s another video from Harvard demonstrating the chemical reaction between hydrogen peroxide and platinum that produces oxygen. ("Powering the Octobot: A chemical reaction")



Friday, September 23, 2016

Show and Tell: image captioning open sourced in TensorFlow

In 2014, research scientists on the Google Brain team trained a machine learning system to automatically produce captions that accurately describe images. Further development of that system led to its success in the Microsoft COCO 2015 image captioning challenge, a competition to compare the best algorithms for computing accurate image captions, where it tied for first place. Today, we’re making the latest version of our image captioning system available as an open source model in TensorFlow. This release contains significant improvements to the computer vision component of the captioning system, is much faster to train, and produces more detailed and accurate descriptions compared to the original system. These improvements are outlined and analyzed in the paper Show and Tell: Lessons learned from the 2015 MSCOCO Image Captioning Challenge, published in IEEE Transactions on Pattern Analysis and Machine Intelligence.
Automatically captioned by our system.
So what’s new? Our 2014 system used the Inception V1 image classification model to initialize the image encoder, which produces the encodings that are useful for recognizing different objects in the images. This was the best image model available at the time, achieving 89.6% top-5 accuracy on the benchmark ImageNet 2012 image classification task. We replaced this in 2015 with the newer Inception V2 image classification model, which achieves 91.8% accuracy on the same task. The improved vision component gave our captioning system an accuracy boost of 2 points in the BLEU-4 metric (which is commonly used in machine translation to evaluate the quality of generated sentences) and was an important factor in its success in the captioning challenge.

Today’s code release initializes the image encoder using the Inception V3 model, which achieves 93.9% accuracy on the ImageNet classification task. Initializing the image encoder with a better vision model gives the image captioning system a better ability to recognize different objects in the images, allowing it to generate more detailed and accurate descriptions. This gives an additional 2 points of improvement in the BLEU-4 metric over the system used in the captioning challenge.

Another key improvement to the vision component comes from fine-tuning the image model. This step addresses the problem that the image encoder is initialized by a model trained to classify objects in images, whereas the goal of the captioning system is to describe the objects in images using the encodings produced by the image model. For example, an image classification model will tell you that a dog, grass and a frisbee are in the image, but a natural description should also tell you the color of the grass and how the dog relates to the frisbee.

In the fine-tuning phase, the captioning system is improved by jointly training its vision and language components on human generated captions. This allows the captioning system to transfer information from the image that is specifically useful for generating descriptive captions, but which was not necessary for classifying objects. In particular, after fine-tuning it becomes better at correctly describing the colors of objects. Importantly, the fine-tuning phase must occur after the language component has already learned to generate captions - otherwise, the noisiness of the randomly initialized language component causes irreversible corruption to the vision component. For more details, read the full paper here.
Left: the better image model allows the captioning model to generate more detailed and accurate descriptions. Right: after fine-tuning the image model, the image captioning system is more likely to describe the colors of objects correctly.
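To make the recipe concrete, here is a minimal sketch of such an encoder-decoder captioner with the two-phase training schedule described above. It is written against the high-level Keras API for brevity; the layer sizes, variable names, and single-LSTM decoder are our simplification, not the released Show and Tell code (see the model's home page for the real implementation):

    # Schematic encoder-decoder captioner with two-phase training. Layer sizes
    # and names are illustrative only, not the released Show-and-Tell model.
    import tensorflow as tf

    VOCAB, EMBED, UNITS, MAXLEN = 10000, 512, 512, 20

    # Image encoder initialized from a pretrained Inception network.
    encoder = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, pooling="avg")

    image_in = tf.keras.Input(shape=(299, 299, 3))
    caption_in = tf.keras.Input(shape=(MAXLEN,), dtype="int32")

    init_h = tf.keras.layers.Dense(UNITS)(encoder(image_in))  # image -> LSTM state
    init_c = tf.keras.layers.Lambda(tf.zeros_like)(init_h)
    words = tf.keras.layers.Embedding(VOCAB, EMBED, mask_zero=True)(caption_in)
    seq = tf.keras.layers.LSTM(UNITS, return_sequences=True)(
        words, initial_state=[init_h, init_c])
    logits = tf.keras.layers.Dense(VOCAB)(seq)   # next-word logits at every step

    model = tf.keras.Model([image_in, caption_in], logits)
    loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

    # Phase 1: train the language components with the vision model frozen.
    encoder.trainable = False
    model.compile(optimizer="adam", loss=loss)
    # model.fit([images, captions_in], captions_shifted, ...) goes here.

    # Phase 2: only once the decoder is trained, unfreeze and fine-tune jointly,
    # so the random decoder's noise cannot corrupt the pretrained vision weights.
    encoder.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss=loss)

The freeze-then-unfreeze order encodes exactly the caveat in the post: fine-tuning only starts after the language component can already generate captions.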
Until recently our image captioning system was implemented in the DistBelief software framework. The TensorFlow implementation released today achieves the same level of accuracy with significantly faster performance: time per training step is just 0.7 seconds in TensorFlow compared to 3 seconds in DistBelief on an Nvidia K20 GPU, meaning that total training time is just 25% of the time previously required.

A natural question is whether our captioning system can generate novel descriptions of previously unseen contexts and interactions. The system is trained by showing it hundreds of thousands of images that were captioned manually by humans, and it often re-uses human captions when presented with scenes similar to what it’s seen before.
When the model is presented with scenes similar to what it’s seen before, it will often re-use human generated captions.
So does it really understand the objects and their interactions in each image? Or does it always regurgitate descriptions from the training data? Excitingly, our model does indeed develop the ability to generate accurate new captions when presented with completely new scenes, indicating a deeper understanding of the objects and context in the images. Moreover, it learns how to express that knowledge in natural-sounding English phrases despite receiving no additional language training other than reading the human captions.
 
Our model generates a completely new caption using concepts learned from similar scenes in the training set
We hope that sharing this model in TensorFlow will help push forward image captioning research and applications, and will also allow interested people to learn and have fun. To get started training your own image captioning system, and for more details on the neural network architecture, navigate to the model’s home-page here. While our system uses the Inception V3 image classification model, you could even try training our system with the recently released Inception-ResNet-v2 model to see if it can do even better!
ORIGINAL: Google Blog
by Chris Shallue, Software Engineer, Google Brain Team September 22, 2016

What Are The Differences Between AI, Machine Learning, NLP, And Deep Learning?

(Image: Creative Commons)

What is the difference between AI, Machine Learning, NLP, and Deep Learning? originally appeared on Quora: the knowledge sharing network where compelling questions are answered by people with unique insights.

Answer by Dmitriy Genzel, PhD in Computer Science, on Quora:
  • AI (Artificial intelligence) is a subfield of computer science that was created in the 1960s, and it was/is concerned with solving tasks that are easy for humans but hard for computers. In particular, a so-called Strong AI would be a system that can do anything a human can (except, perhaps, purely physical tasks). This is fairly generic and includes all kinds of tasks, such as  
    • planning, 
    • moving around in the world, 
    • recognizing objects and sounds, 
    • speaking, 
    • translating, 
    • performing social or business transactions, 
    • creative work (making art or poetry), 
    • etc.
  • NLP (Natural language processing) is simply the part of AI that has to do with language (usually written).
  • Machine learning is concerned with one aspect of this: 
    • given some AI problem that can be described in discrete terms (e.g. out of a particular set of actions, which one is the right one), and 
    • given a lot of information about the world, 
    • figure out what is the “correct” action, without having the programmer program it in. 
    • Typically some outside process is needed to judge whether the action was correct or not. 
    • In mathematical terms, it’s a function: you feed in some input, and you want it to produce the right output, so the whole problem is simply to build a model of this mathematical function in some automatic way. To draw a distinction with AI, if I can write a very clever program that has human-like behavior, it can be AI, but unless its parameters are automatically learned from data, it’s not machine learning.
  • Deep learning is one kind of machine learning that’s very popular now. It involves a particular kind of mathematical model that can be thought of as a composition of simple blocks (function composition) of a certain type, and where some of these blocks can be adjusted to better predict the final outcome.
The word “deep” means that the composition has many of these blocks stacked on top of each other, and the tricky bit is how to adjust the blocks that are far from the output, since a small change there can have very indirect effects on the output. This is done via something called Backpropagation inside of a larger process called Gradient descent which lets you change the parameters in a way that improves your model.
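Here is that whole story in a few lines of numpy: two stacked blocks, a loss, and hand-written backpropagation inside a gradient-descent loop. The task (XOR) and all the sizes are chosen purely for illustration:

    # A "composition of simple blocks" trained by gradient descent: one hidden
    # layer learning XOR, with backpropagation written out by hand (numpy only).
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # block 1: linear + tanh
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # block 2: linear + sigmoid
    lr = 0.5

    for step in range(5000):
        h = np.tanh(X @ W1 + b1)                     # forward through block 1
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # forward through block 2
        # Backpropagation: push the output error back through each block.
        d_out = p - y                                # cross-entropy gradient at output
        d_h = (d_out @ W2.T) * (1.0 - h ** 2)        # chain rule through tanh
        W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
        W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

    print(np.round(p.ravel(), 2))   # -> close to [0, 1, 1, 0]

The update of W1 is the "block far from the output" being adjusted: its gradient only arrives indirectly, by chaining the output error back through W2 and the tanh.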

Artificial Intelligence: What is artificial intelligence and why do we need it?
Machine Learning: What is machine learning?
Natural Language Processing: What makes natural language processing difficult?

ORIGINAL: Quora
June 8, 2016

Tardigrade protein helps human DNA withstand radiation.

Eye of Science/Science Photo Library  Experiments show that the tardigrade’s resilience can be transferred to cultures of human cells.
Water bears are renowned for their ability to withstand extreme conditions.
Tardigrades, or water bears, are pudgy, microscopic animals that look like a cross between a caterpillar and a naked mole rat. These aquatic invertebrates are consummate survivors, capable of withstanding a host of extremes, including near total dehydration and the insults of space.

Now, a paper [1] published on 20 September in Nature Communications pinpoints the source of yet another tardigrade superpower: a protective protein that provides resistance to damaging X-rays. And researchers were able to transfer that resistance to human cells.

“Tolerance against X-ray is thought to be a side-product of [the] animal's adaption to severe dehydration,” says lead study author Takekazu Kunieda, a molecular biologist at the University of Tokyo. According to Kunieda, severe dehydration wreaks havoc on the molecules in living things. It can even tear apart DNA, much like X-rays can.


The researchers wanted to know how tardigrades protected themselves against such harsh conditions. So Kunieda and his colleagues began by sequencing the genome of Ramazzottius varieornatus, a species that is particularly stress tolerant. It's easier to study processes within the tardigrade's cells when the animal's genome is inserted into mammalian cells, says Kunieda. So researchers manipulated cultures of human cells to produce pieces of the water bear's inner machinery to determine which parts were actually giving the animals their resistance.

Eventually, Kunieda and his colleagues discovered that a protein known as Dsup prevented the animal's DNA from breaking under the stress of radiation and desiccation. And they also found that the tardigrade-tinged human cells were able to suppress X-ray induced damage by about 40%.

Genomic treasure trove

“Protection and repair of DNA is a fundamental component of all cells and a central aspect in many human diseases, including cancer and ageing,” says Ingemar Jönsson, an evolutionary ecologist who studies tardigrades at Kristianstad University in Sweden.

This makes the new paper’s findings “highly interesting for medicine”, says Jönsson. It opens up the possibility of improving the stress resistance of human cells, which could one day benefit people undergoing radiation therapies.

Kunieda adds that these findings may one day protect workers from radiation in nuclear facilities or possibly help us to grow crops in extreme environments, such as the ones found on Mars.

Bob Goldstein, a biologist at the University of North Carolina at Chapel Hill who helped to sequence the genome of another tardigrade species [2], says the research is exciting and clever. He also thinks that the study’s authors are correct in predicting that this is probably just the first of many such discoveries.

“The tardigrade is resistant to a lot of different kinds of extremes,” says Goldstein. And this means that the animals must have many different ways of protecting themselves.

“We are really just at the beginning of exploring the genetic treasure that the tardigrade genome represents,” says Jönsson.

Nature doi:10.1038/nature.2016.20648

References

1. Hashimoto, T. et al. Nature Commun. http://dx.doi.org/10.1038/ncomms12808 (2016).
2. Boothby, T. C. et al. Proc. Natl Acad. Sci. USA 112, 15976–15981 (2015).


ORIGINAL: Nature
By Jason Bittel
20 September 2016

 

Thursday, September 22, 2016

Fusing of Organic Molecules With Graphene Opens Up New Applications

A model molecule: Prof. Wilhelm Auwärter with a porphin model. (Photo: TUM)
The hemoglobin-like molecule called porphyrin, which is responsible for making photosynthesis possible in plants and transporting oxygen in our blood, has been combined with graphene by researchers at the Technical University of Munich (TUM) in a new method that may make possible everything from molecular electronics to improved gas sensors.

While graphene’s properties—ranging from its electrical conductivity to its tensile strength—have made it desirable in a number of electronic applications, it still needs to be combined with so-called functional molecules to make it useful in applications such as photovoltaics and gas sensors. To date, the addition of these other functional molecules has been carried out through “wet chemistry,” which limits the amount of control possible over the properties of the resulting material.

However, in a method described in the journal Nature Chemistry, the TUM researchers developed a highly controllable “dry” method based on exploiting the catalytic properties of a silver surface on which the graphene layer rested inside an ultra-high vacuum.

The benefit of this technique is that it preserves all the attractive properties of the porphyrins even after being combined with the graphene, most notably their intrinsic ability to have their electronic and magnetic properties tuned by the addition of different metal atoms. In terms of real-world devices, this means that these different metal atoms can bind with gas molecules to create effective gas sensors.

More generally, the method the TUM researchers have developed could be a breakthrough for how graphene is functionalized for a range of electronic applications.

“The key to the importance of this research in terms of electronics is the complementary electronic structure in the graphene and the porphyrins,” said Wilhelm Auwärter, a professor at TUM who led the research, in an e-mail interview with IEEE Spectrum. “The porphyrins feature large electronic gaps, in contrast to graphene. The electronic, optical and magnetic properties of the porphyrins can be tuned by the choice of the metal center of the molecule.” Electronic band gaps are critical to controlling how conductive a material is and, in turn, whether or not the material can be used in an electronic switch such as a transistor.

Auwärter further explains that the electronic and magnetic properties of the porphyrins can also be modified by the attachment of gaseous ligands (like oxygen or nitric oxide). This would allow, for example, turning the material’s mechanical response to a magnetic field on and off. “Such functionalities are not inherent to the pristine graphene,” he added.

Auwärter also said that it should be possible to directly incorporate porphyrins into graphene nanoribbons. “In this way, one could achieve sequences of graphene ‘wires’ and porphyrin units. This should allow the engineering of an electronic gap in the hybrid structures,” he said.

While Auwärter believes that this manufacturing approach provides an avenue that could lead to new device designs for a range of electronic applications, he does concede that this is preliminary research that primarily serves as a starting point.

We need to apply our protocol to well-defined graphene nanostructures, such as nanoribbons or nanographenes,” said Auwärter. “We need to place the hybrid structures on specific supports or to include them in layered materials and devices.”

In the future, to exploit this method for electronic applications, Auwärter points out that the hybrid material will need to be grown on insulating supports like hexagonal boron nitride.

While the electronic applications may still be somewhat far off, the novel protocol does offer an intriguing way forward for graphene-based electronics.

ORIGINAL: IEEE Spectrum
By Dexter Johnson
22 Sep 2016

Sunday, September 11, 2016

WaveNet: A Generative Model for Raw Audio

This post presents WaveNet, a deep generative model of raw audio waveforms. We show that WaveNets are able to generate speech which mimics any human voice and which sounds more natural than the best existing Text-to-Speech systems, reducing the gap with human performance by over 50%.

We also demonstrate that the same network can be used to synthesize other audio signals such as music, and present some striking samples of automatically generated piano pieces.

Talking Machines
Allowing people to converse with machines is a long-standing dream of human-computer interaction. The ability of computers to understand natural speech has been revolutionised in the last few years by the application of deep neural networks (e.g., Google Voice Search). However, generating speech with computers — a process usually referred to as speech synthesis or text-to-speech (TTS) — is still largely based on so-called concatenative TTS, where a very large database of short speech fragments is recorded from a single speaker and then recombined to form complete utterances. This makes it difficult to modify the voice (for example switching to a different speaker, or altering the emphasis or emotion of their speech) without recording a whole new database.

This has led to a great demand for parametric TTS, where all the information required to generate the data is stored in the parameters of the model, and the contents and characteristics of the speech can be controlled via the inputs to the model. So far, however, parametric TTS has tended to sound less natural than concatenative, at least for syllabic languages such as English. Existing parametric models typically generate audio signals by passing their outputs through signal processing algorithms known as vocoders.

WaveNet changes this paradigm by directly modelling the raw waveform of the audio signal, one sample at a time. As well as yielding more natural-sounding speech, using raw waveforms means that WaveNet can model any kind of audio, including music.

WaveNets

Wave animation

Researchers usually avoid modelling raw audio because it ticks so quickly: typically 16,000 samples per second or more, with important structure at many time-scales. Building a completely autoregressive model, in which the prediction for every one of those samples is influenced by all previous ones (in statistics-speak, each predictive distribution is conditioned on all previous observations), is clearly a challenging task.

However, our PixelRNN and PixelCNN models, published earlier this year, showed that it was possible to generate complex natural images not only one pixel at a time, but one colour-channel at a time, requiring thousands of predictions per image. This inspired us to adapt our two-dimensional PixelNets to a one-dimensional WaveNet.

Architecture animation

The above animation shows how a WaveNet is structured. It is a fully convolutional neural network, where the convolutional layers have various dilation factors that allow its receptive field to grow exponentially with depth and cover thousands of timesteps.

At training time, the input sequences are real waveforms recorded from human speakers. After training, we can sample the network to generate synthetic utterances. At each step during sampling a value is drawn from the probability distribution computed by the network. This value is then fed back into the input and a new prediction for the next step is made. Building up samples one step at a time like this is computationally expensive, but we have found it essential for generating complex, realistic-sounding audio.
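For readers who want the idea in code, here is a bare-bones sketch of both ingredients, written with the Keras API (not an official WaveNet implementation): a stack of causal convolutions with exponentially growing dilation, plus the naive one-sample-at-a-time generation loop. The gated activations, residual and skip connections, and conditioning inputs of the real WaveNet are omitted, and the network here is untrained:

    # Bare-bones WaveNet-style sketch: causal dilated convolutions plus naive
    # sample-by-sample generation. Untrained; training would minimize
    # cross-entropy over the 256 quantization classes on real waveforms.
    import numpy as np
    import tensorflow as tf

    QUANT = 256   # 8-bit mu-law quantization of the raw waveform
    layers = [tf.keras.layers.Embedding(QUANT, 32)]
    for dilation in [1, 2, 4, 8, 16, 32, 64, 128]:   # receptive field: 256 samples
        layers.append(tf.keras.layers.Conv1D(
            32, kernel_size=2, dilation_rate=dilation,
            padding="causal", activation="relu"))    # causal: no peeking ahead
    layers.append(tf.keras.layers.Conv1D(QUANT, kernel_size=1))  # per-sample logits
    model = tf.keras.Sequential(layers)

    # Autoregressive sampling: draw a sample from the predicted distribution,
    # feed it back in, predict the next one. Slow by construction.
    audio = [QUANT // 2]                             # start from mu-law "silence"
    for _ in range(1000):
        context = np.array(audio[-256:])[None, :]    # the last receptive field
        logits = model(context)[0, -1].numpy().astype("float64")
        probs = np.exp(logits - logits.max()); probs /= probs.sum()
        audio.append(int(np.random.choice(QUANT, p=probs)))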

Improving the State of the Art
We trained WaveNet using some of Google’s TTS datasets so we could evaluate its performance. The following figure shows the quality of WaveNets on a scale from 1 to 5, compared with Google’s current best TTS systems (parametric and concatenative), and with human speech using Mean Opinion Scores (MOS). MOS are a standard measure for subjective sound quality tests, and were obtained in blind tests with human subjects (from over 500 ratings on 100 test sentences). As we can see, WaveNets reduce the gap between the state of the art and human-level performance by over 50% for both US English and Mandarin Chinese.

For both Chinese and English, Google’s current TTS systems are considered among the best worldwide, so improving on both with a single model is a major achievement.

Here are some samples from all three systems so you can listen and compare yourself:

US English:



Mandarin Chinese:



Knowing What to Say
In order to use WaveNet to turn text into speech, we have to tell it what the text is. We do this by transforming the text into a sequence of linguistic and phonetic features (which contain information about the current phoneme, syllable, word, etc.) and by feeding it into WaveNet. This means the network’s predictions are conditioned not only on the previous audio samples, but also on the text we want it to say.

If we train the network without the text sequence, it still generates speech, but now it has to make up what to say. As you can hear from the samples below, this results in a kind of babbling, where real words are interspersed with made-up word-like sounds:





Notice that non-speech sounds, such as breathing and mouth movements, are also sometimes generated by WaveNet; this reflects the greater flexibility of a raw-audio model.

As you can hear from these samples, a single WaveNet is able to learn the characteristics of many different voices, male and female. To make sure it knew which voice to use for any given utterance, we conditioned the network on the identity of the speaker. Interestingly, we found that training on many speakers made it better at modelling a single speaker than training on that speaker alone, suggesting a form of transfer learning.

By changing the speaker identity, we can use WaveNet to say the same thing in different voices:

Similarly, we could provide additional inputs to the model, such as emotions or accents, to make the speech even more diverse and interesting.

Making Music
Since WaveNets can be used to model any audio signal, we thought it would also be fun to try to generate music. Unlike the TTS experiments, we didn’t condition the networks on an input sequence telling it what to play (such as a musical score); instead, we simply let it generate whatever it wanted to. When we trained it on a dataset of classical piano music, it produced fascinating samples like the ones below:




WaveNets open up a lot of possibilities for TTS, music generation and audio modelling in general. The fact that directly generating timestep per timestep with deep neural networks works at all for 16kHz audio is really surprising, let alone that it outperforms state-of-the-art TTS systems. We are excited to see what we can do with them next.

For more details, take a look at our paper.

ORIGINAL: Google DeepMind
Aäron van den Oord. Research Scientist, DeepMind
Heiga Zen. Research Scientist, Google
Sander Dieleman. Research Scientist, DeepMind
8 September 2016


