Thursday, April 28, 2016

OpenAI Gym Beta





We're releasing the public beta of OpenAI Gym, a toolkit for developing and comparing reinforcement learning (RL) algorithms. It consists of a growing suite of environments (from simulated robots to Atari games), and a site for comparing and reproducing results. OpenAI Gym is compatible with algorithms written in any framework, such as TensorFlow and Theano. The environments are written in Python, but we'll soon make them easy to use from any language.

We originally built OpenAI Gym as a tool to accelerate our own RL research. We hope it will be just as useful for the broader community.

Getting started
If you'd like to dive in right away, you can work through our tutorial. You can also help out while learning by reproducing a result.
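The tutorial is built around the standard agent-environment loop: reset the environment, then repeatedly pick an action and step until the episode ends. A minimal sketch of that reset/step interface, using a self-contained toy environment (the `GuessingGame` class and its dynamics are invented here for illustration, not a real Gym environment):

```python
import random

class GuessingGame:
    """A toy stand-in that follows the Gym-style reset/step interface.

    The agent must guess a hidden integer; the episode ends on a correct
    guess or after 10 steps. (Illustrative only, not a real Gym env.)
    """
    def __init__(self):
        self.target = None
        self.steps = 0

    def reset(self):
        self.target = random.randint(0, 9)
        self.steps = 0
        return 0  # initial observation

    def step(self, action):
        self.steps += 1
        reward = 1.0 if action == self.target else 0.0
        done = (action == self.target) or self.steps >= 10
        # Observation hints which direction the guess missed.
        observation = (action > self.target) - (action < self.target)
        return observation, reward, done, {}

env = GuessingGame()
observation = env.reset()
total_reward = 0.0
done = False
while not done:
    action = random.randint(0, 9)  # a random agent
    observation, reward, done, info = env.step(action)
    total_reward += reward
```

A real Gym environment is obtained with `gym.make(...)` instead, but exposes the same `reset`/`step` contract: `step` returns an observation, a reward, a done flag, and an info dict.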

Why RL?
Reinforcement learning (RL) is the subfield of machine learning concerned with decision making and motor control. It studies how an agent can learn how to achieve goals in a complex, uncertain environment. It's exciting for two reasons:
  1. RL is very general, encompassing all problems that involve making a sequence of decisions: for example, controlling a robot's motors so that it's able to run and jump, making business decisions like pricing and inventory management, or playing video games and board games. RL can even be applied to supervised learning problems with sequential or structured outputs.
  2. RL algorithms have started to achieve good results in many difficult environments. RL has a long history, but until recent advances in deep learning, it required lots of problem-specific engineering. DeepMind's Atari results, BRETT from Pieter Abbeel's group, and AlphaGo all used deep RL algorithms which did not make too many assumptions about their environment, and thus can be applied in other settings.
However, RL research is also slowed down by two factors:
  1. The need for better benchmarks. In supervised learning, progress has been driven by large labeled datasets like ImageNet. In RL, the closest equivalent would be a large and diverse collection of environments. However, the existing open-source collections of RL environments don't have enough variety, and they are often difficult to even set up and use.
  2. Lack of standardization of environments used in publications. Subtle differences in the problem definition, such as the reward function or the set of actions, can drastically alter a task's difficulty. This issue makes it difficult to reproduce published research and compare results from different papers.
OpenAI Gym is an attempt to fix both problems.

The Environments
OpenAI Gym provides a diverse suite of environments that range from easy to difficult and involve many different kinds of data. We're starting out with the following collections:
  • Classic control and toy text: complete small-scale tasks, mostly from the RL literature. They're here to get you started.
  • Algorithmic: perform computations such as adding multi-digit numbers and reversing sequences. One might object that these tasks are easy for a computer. The challenge is to learn these algorithms purely from examples. These tasks have the nice property that it's easy to vary the difficulty by varying the sequence length.
  • Atari: play classic Atari games. We've integrated the Arcade Learning Environment (which has had a big impact on reinforcement learning research) in an easy-to-install form.
  • Board games: play Go on 9x9 and 19x19 boards. Two-player games are fundamentally different than the other settings we've included, because there is an adversary playing against you. In our initial release, there is a fixed opponent provided by Pachi, and we may add other opponents later (patches welcome!). We'll also likely expand OpenAI Gym to have first-class support for multi-player games.
  • 2D and 3D robots: control a robot in simulation. These tasks use the MuJoCo physics engine, which was designed for fast and accurate robot simulation. Included are some environments from a recent benchmark by UC Berkeley researchers (who incidentally will be joining us this summer). MuJoCo is proprietary software, but offers free trial licenses.
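The difficulty knob mentioned for the Algorithmic tasks is easy to see in code. A toy generator for the sequence-reversal task (a sketch of the task format only, not the actual environment implementation):

```python
import random

def make_reversal_task(length, alphabet=5):
    """Generate one instance of a sequence-reversal task.

    Difficulty is controlled by `length`: as in the Algorithmic
    environments, longer sequences are harder to learn.
    """
    sequence = [random.randrange(alphabet) for _ in range(length)]
    target = list(reversed(sequence))
    return sequence, target

# An agent that has learned the algorithm perfectly would map
# each input to its reversal:
seq, target = make_reversal_task(length=6)
prediction = seq[::-1]
```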
Over time, we plan to greatly expand this collection of environments. Contributions from the community are more than welcome.

Each environment has a version number (such as Hopper-v0). If we need to change an environment, we'll bump the version number, defining an entirely new task. This ensures that results on a particular environment are always comparable.
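One way to picture this versioning rule is as an append-only registry keyed by the full versioned ID (a hypothetical sketch, not Gym's actual registration code):

```python
# Version-pinned environment IDs: changing a task means registering a
# new versioned ID, so old results stay tied to the exact task.
registry = {}

def register(env_id, spec):
    if env_id in registry:
        raise ValueError(f"{env_id} already registered; bump the version instead")
    registry[env_id] = spec

register("Hopper-v0", {"max_steps": 1000})
# A change to the task (say, a new step limit) becomes a new version:
register("Hopper-v1", {"max_steps": 2000})
```

Because `Hopper-v0` can never be redefined in place, a score reported against it remains comparable to every other `Hopper-v0` score.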

Evaluations
We've made it easy to upload results to OpenAI Gym. However, we've opted not to create traditional leaderboards. What matters for research isn't your score (it's possible to overfit or hand-craft solutions to particular tasks), but instead the generality of your technique.

We're starting out by maintaining a curated list of contributions that say something interesting about algorithmic capabilities. Long-term, we want this curation to be a community effort rather than something owned by us. We'll necessarily have to figure out the details over time, and we'd love your help in doing so.


We want OpenAI Gym to be a community effort from the beginning. We've started working with partners to put together resources around OpenAI Gym.
During the public beta, we're looking for feedback on how to make this into an even better tool for research. If you'd like to help, you can try your hand at improving the state-of-the-art on each environment, reproducing other people's results, or even implementing your own environments. Also please join us in the community chat!

ORIGINAL: OpenAI
by Greg Brockman and John Schulman
April 27, 2016

Sunday, April 24, 2016

Theranos Is Subject of Criminal Probe by U.S.

Federal prosecutors are investigating whether the blood-testing company misled investors about the state of its technology and operations

Elizabeth Holmes, founder and CEO of Theranos, in October. PHOTO: NIKKI RITCHER/THE WALL STREET JOURNAL
Federal prosecutors have launched a criminal investigation into whether Theranos Inc. misled investors about the state of its technology and operations, according to people familiar with the matter.

Walgreens Boots Alliance Inc. and the New York State Department of Health have received subpoenas in recent weeks seeking documents and testimony about representations made to them by the Palo Alto, Calif., blood-testing company, some of the people said.

Walgreens has been Theranos’s main conduit to consumers since the companies announced a partnership in 2013 that now includes 40 Theranos wellness centers at drugstores in Arizona. The New York agency received an application from Theranos for a laboratory license in the state.

People familiar with the matter said the subpoenas seek broad information about how Theranos described its technologies and the progress it was making developing those technologies.


Investigators are also examining whether Theranos misled government officials, which can be a crime under federal law, some of the people said.

Such subpoenas don’t necessarily mean prosecutors are actively seeking an indictment. People familiar with the matter said the investigation is at an early stage.

In addition to the criminal probe, the Securities and Exchange Commission is examining whether Theranos made deceptive statements to investors when it solicited funding, according to people familiar with the matter. Theranos was valued at $9 billion in a funding round in 2014 and the majority stake of Elizabeth Holmes, the startup’s founder and chief executive, at more than half that.

In a statement, Theranos said: “The company continues to work closely with regulators and is cooperating fully with all investigations.”

SEC spokeswoman Judith Burns declined to comment, as did Justice Department spokesman Peter Carr and Abraham Simmons, an assistant U.S. attorney in San Francisco, where the federal investigation is being conducted.

Chart: Companies valued at $1 billion or more by venture-capital firms

Walgreens spokesman Michael Polzin also declined to comment. New York health-department spokesman J.P. O’Hare didn’t respond to requests for comment.

Since launching Theranos in 2003, Ms. Holmes has set out to revolutionize the blood-testing industry. Before the company made changes to its website earlier this year, the website cited “breakthrough advancements” that made it possible to run “the full range” of lab tests on a few drops of blood pricked from a finger.

In October, The Wall Street Journal reported that Theranos did the vast majority of more than 200 tests it offered to consumers on traditional lab machines purchased from other companies. The Journal also reported that some former employees doubted the accuracy of a small number of tests run on the devices Theranos invented, code-named Edison.

Theranos has declined to say how many tests or which ones it runs on commercial machines. The company has said its technology has the capability to handle a broad range of tests.

Federal officials began requesting information about Theranos in January and February, according to the people familiar with the matter. Those informal requests were followed by grand-jury subpoenas from a federal court in San Francisco in March, the people said. Agents from the Federal Bureau of Investigation and U.S. Postal Inspection Service are assisting in the investigation, the people said.

The news release issued when the Walgreens deal was announced said consumers “will be able to access less invasive and more affordable clinician-directed lab testing, from a blood sample as small as a few drops, or 1/1,000 the size of a typical blood draw.”

As part of the deal, Walgreens has invested at least $50 million into Theranos, according to people familiar with the matter.

In January, though, Walgreens notified Theranos that it intends to terminate the partnership unless the company quickly fixes problems found in a federal inspection completed in November at Theranos’s lab in Newark, Calif.

Last month, federal health regulators proposed banning Ms. Holmes from the blood-testing business for at least two years after concluding that the company failed to resolve what officials have called major problems found during the inspection.

Related Video

U.S. health regulators have proposed banning Theranos founder Elizabeth Holmes and another executive for not doing enough to fix problems at the blood-testing startup. WSJ's Christopher Weaver joins Tanya Rivero to discuss. Photo: AP

Theranos spokeswoman Brooke Buchanan said the company has submitted a response addressing the concerns and hopes to avert the sanctions. The sanctions haven’t been imposed. If they are, Theranos can appeal.

The company began running some tests on Edisons in its California lab in late 2013, according to some former employees and the federal inspection report.

Theranos’s lab-license application in New York said the company planned to test patients’ blood on traditional lab machines and didn’t mention any proprietary testing devices, said someone with knowledge of the application.

Theranos also enrolled in the New York agency’s proficiency-testing program, in which regulators monitor a lab’s accuracy by sending it samples of preserved blood with known characteristics and asking the lab to test them.

If the lab’s results are in line with those reported by a peer group, it receives a passing grade.

In March 2014, a Theranos employee alleged to the agency in an email that Theranos was manipulating its proficiency-testing program by reporting back results obtained from traditional lab machines for some tests, instead of the Edison devices with which it was running those tests on live patient samples.

Theranos said it uses an alternative process for proficiency testing. The process “has been disclosed to and discussed with regulators,” said Ms. Buchanan, the Theranos spokeswoman. “Theranos’ proficiency testing process meets the regulatory requirements.”

State records show Theranos never obtained a New York license. The person with knowledge of the company’s application said it was shelved when Theranos’s lab director at the time wrote to the agency to inform it he had resigned and wanted his name taken off the application.

The SEC has been paying closer attention recently to ensuring that large private technology firms properly inform investors about their finances and valuations. In a speech at Stanford University late last month, SEC Chairwoman Mary Jo White said: “The risk of distortion and inaccuracy is amplified because start-up companies, even quite mature ones, often have far less robust internal controls and governance procedures than most public companies.”

—Jean Eaglesham contributed to this article.

Write to Christopher Weaver at christopher.weaver@wsj.com, John Carreyrou at john.carreyrou@wsj.com and Michael Siconolfi at michael.siconolfi@wsj.com

ORIGINAL: WSJ
By CHRISTOPHER WEAVER, JOHN CARREYROU and MICHAEL SICONOLFI
April 18, 2016 6:37 p.m. ET

These trees in Ecuador are reportedly 'walking' up to 20 metres per year (but...)

Image credit: Ruestz
The Ents are going to war (maybe).

They might lack the ability to launch a war against the forces of Isengard, or converge for a meeting about what to do about some pesky hobbits, but a species of tree in a remote section of Ecuador can, reportedly, walk.

You heard that right, there’s a very real species of palm tree, Socratea exorrhiza, that can grow new roots to 'sidestep' its way to better soil. And not just a little sidestep, either. These mobile trees can travel about 20 metres a year, according to Karl Gruber from the BBC. Or, that's how the legend goes, but is it true?

It seems like a pretty simple question to answer, but it's so much more complicated than you might expect. Some reports, like Gruber’s, claim that the famed trees 'walk' by sprouting new roots, which allows them to sort of sidestep their way very slowly through the forest. 

But according to a 2005 paper by biologist Gerardo Avalos, the trees, which do produce new roots on occasion, stay firmly planted in one space. Just because they sprout new roots, doesn't mean they use them to move around.

"My paper proves that the belief of the walking palm is just a myth," Avalos told Live Science’s Benjamin Radford. "Thinking that a palm tree could actually track canopy light changes by moving slowly over the forest floor … is a myth that tourist guides find amusing to tell visitors to the rainforest."

So why all the confusion? It all seems to stem (sorry) from the tree’s unique root system.

Unlike other trees that have roots fully hidden underground, the walking palm trees have a higher root system that starts near the bottom of their trunks. This leaves the trees looking more like an upright broom than an actual tree. Over time, as soil erodes, some of these roots die off, and new roots form.

So the question is: do these new roots eventually shift the tree’s location? All signs, sadly, point to no.

The walking palm, as cool as it sounds, is probably something cooked up by tour guides to add a bit of spice to their lecturing - a conclusion that is furthered by the fact that, if you do a quick search, there aren’t any time-lapse videos of one of these trees 'walking', but a tonne of videos of people saying they do.

While it's disappointing for those of us who really want to believe in the idea that a tree can have some form of mobility other than growing towards the light, there are plenty of plants that do, in fact, move. Take the Venus flytrap, which eats small insects by chomping down on them, or Mimosa pudica - also known as the 'sensitive plant' - which recoils at a touch.

So until someone can either document the walking trees on the move or publish a paper describing them, we have to follow the evidence that says they stay put.

ORIGINAL: Science Alert
By JOSH HRALA
23 APR 2016

Thursday, April 21, 2016

Meet the Nanomachines That Could Drive a Medical Revolution


A group of physicists recently built the smallest engine ever created from just a single atom. Like any other engine it converts heat energy into movement — but it does so on a smaller scale than ever seen before. The atom is trapped in a cone of electromagnetic energy and lasers are used to heat it up and cool it down, which causes the atom to move back and forth in the cone like an engine piston.

The scientists from the University of Mainz in Germany who are behind the invention don’t have a particular use in mind for the engine. But it’s a good illustration of how we are increasingly able to replicate the everyday machines we rely on at a tiny scale. This is opening the way for some exciting possibilities in the future, particularly in the use of nanorobots in medicine, that could be sent into the body to release targeted drugs or even fight diseases such as cancer.

Nanotechnology deals with ultra-small objects equivalent to one billionth of a meter in size, which sounds an impossibly tiny scale at which to build machines. But size is relative to how close you are to an object. We can’t see things at the nanoscale with the naked eye, just as we can’t see the outer planets of the solar system. Yet if we zoom in — with a telescope for the planets or a powerful electron microscope for nano-objects — then we change the frame of reference and things look very different.

However, even after getting a closer look, we still can’t build machines at the nanoscale using conventional engineering tools. While regular machines, such as the internal combustion engines in most cars, operate according to the rules of physics laid out by Isaac Newton, things at the nanoscale follow the more complex laws of quantum mechanics. So we need different tools that take into account the quantum world in order to manipulate atoms and molecules in a way that uses them as building blocks for nanomachines. Here are four more tiny machines that could have a big impact.

1- Graphene engine for nanorobots
Researchers from Singapore have recently demonstrated a simple but nano-sized engine made from a highly elastic piece of graphene. Graphene is a two-dimensional sheet of carbon atoms that has exceptional mechanical strength. Inserting some chlorine and fluorine molecules into the graphene lattice and firing a laser at it causes the sheet to expand. Rapidly turning the laser on and off makes the graphene pump back and forth like the piston in an internal combustion engine.

The researchers think the graphene nano-engine could be used to power tiny robots, for example to attack cancer cells in the body. Or it could be used in a so-called “lab-on-a-chip” — a device that shrinks the functions of a chemistry lab into a tiny package that can be used for rapid blood tests, among other things.

2- Frictionless nano-rotor
Molecular motor.
Image credit: 
Palma, C.-A.; Kühne, D.; Klappenberger, F.; Barth, J.V.; Technische Universität München


The rotors that produce movement in machines such as aircraft engines and fans usually suffer from friction, which limits their performance. Nanotechnology can be used to create a motor from a single molecule, which can rotate without any friction. Normal rotors interact with the air according to Newton’s laws as they spin round and so experience friction. But, at the nanoscale, molecular rotors follow quantum laws, meaning they don’t interact with the air in the same way and so friction doesn’t affect their performance.

Nature has actually already shown us that molecular motors are possible. Certain proteins can travel along a surface using a rotating mechanism that creates movement from chemical energy. These motor proteins are what cause cells to contract and so are responsible for our muscle movements.

Researchers from Germany recently reported creating a molecular rotor by placing moving molecules inside a tiny hexagonal hole known as a nanopore in a thin piece of silver. The position and movement of the molecules meant they began to rotate around the hole like a rotor. Again, this form of nano-engine could be used to power a tiny robot around the body.

3- Controllable nano-rockets


A rocket is the fastest man-made vehicle that can freely travel across the universe. Several groups of researchers have recently constructed a high-speed, remote-controlled nanoscale version of a rocket by combining nanoparticles with biological molecules.

In one case, the body of the rocket was made from a polystyrene bead covered in gold and chromium. This was attached to multiple “catalytic engine” molecules using strands of DNA. When placed in a solution of hydrogen peroxide, the engine molecules caused a chemical reaction that produced oxygen bubbles, forcing the rocket to move in the opposite direction. Shining a beam of ultra-violet light on one side of the rocket causes the DNA to break apart, detaching the engines and changing the rocket’s direction of travel. The researchers hope to develop the rocket so it can be used in any environment, for example to deliver drugs to a target area of the body.

4- Magnetic nano-vehicles for carrying drugs

Magnetic nanoparticles. Image credit: Tapas Sen, author provided
My own research group is among those working on a simpler way to carry drugs through the body, one already being explored with magnetic nanoparticles. Drugs are injected into a magnetic shell structure that can expand in the presence of heat or light. This means that, once inserted into the body, they can be guided to the target area using magnets and then activated to expand and release their drug.

The technology is also being studied for medical imaging. Designing the nanoparticles to gather in certain tissues and then scanning the body with magnetic resonance imaging (MRI) could help highlight problems such as diabetes.

Tapas Sen, Reader in Nanomaterials Chemistry, University of Central Lancashire

This article was originally published on The Conversation. Read the original article.

ORIGINAL: Singularity Hub

Monday, April 11, 2016

Light-activated communication in synthetic tissues

Michael J. Booth*, Vanessa Restrepo Schild, Alexander D. Graham, Sam N. Olof and Hagan Bayley

Chemistry Research Laboratory, University of Oxford, Oxford OX1 3TA, UK.
*Corresponding author. E-mail: michael.booth@chem.ox.ac.uk
Science Advances 01 Apr 2016:
Vol. 2, no. 4, e1600056
DOI: 10.1126/sciadv.1600056

Abstract
We have previously used three-dimensional (3D) printing to prepare tissue-like materials in which picoliter aqueous compartments are separated by lipid bilayers. These printed droplets are elaborated into synthetic cells by using a tightly regulated in vitro transcription/translation system. A light-activated DNA promoter has been developed that can be used to turn on the expression of any gene within the synthetic cells. We used light activation to express protein pores in 3D-printed patterns within synthetic tissues. The pores are incorporated into specific bilayer interfaces and thereby mediate rapid, directional electrical communication between subsets of cells. Accordingly, we have developed a functional mimic of neuronal transmission that can be controlled in a precise way.

INTRODUCTION
Cell-free expression systems have been widely used in synthetic biology to create systems that can express functional proteins in a minimal cell-like environment (1–3). These systems have been used for in vitro selection and evolution of proteins (4–7) and for control of mammalian (8) and bacterial cells (9). This previous research was performed by encapsulating the cell-free expression system in a single lipid bilayer–bounded compartment, a synthetic cell. No systems have been created where multiple soft encapsulated synthetic cells can communicate with each other, although patterned two-dimensional (2D) solid-state microfluidic chambers containing cell-free expression medium can communicate through diffusion (10). Furthermore, no light-based method has been developed that can control protein expression inside synthetic cells. Here, we have created 3D synthetic tissues made up of hundreds of synthetic cells, using a water-in-oil droplet 3D printer (11). Additionally, we have developed a tightly regulated light-activated DNA (LA-DNA) promoter. By using these technologies in combination, light-activated electrical communication through the synthetic tissues has been achieved by expressing a transmembrane pore, α-hemolysin (αHL), in a subset of the synthetic cells, 3D-printed to form a conductive pathway that is a functional mimic of neuronal transmission.

RESULTS
Light-activated transcription and/or translation has been achieved previously; however, these systems either do not fully repress transcription in the off state (12) or cannot be encapsulated inside synthetic cells due to bursting of the lipid membranes (13, 14). We sought to develop an efficient light-activated T7 promoter (LA-T7 promoter) that could be placed upstream of any gene of interest so that no expression would occur until the DNA was illuminated (Fig. 1A). A complete off state is required so that no protein expression, and therefore no function, is observed without activation. To achieve this, C6-amino-dT–modified bases were incorporated across a single-stranded T7 promoter DNA sequence (Fig. 1B). A photocleavable (PC) biotin N-hydroxysuccinimide ester linker (15) was coupled to all the amines (fig. S1) so that, when the modified DNA was used as a polymerase chain reaction (PCR) primer with a gene of interest, PC biotin moieties would protrude from the major groove at the T7 polymerase binding site (Fig. 1A). The PC group was 2-nitrobenzyl, which allows rapid and efficient cleavage back to the original primary amine (15) to leave minimal scarring of the T7 promoter and allow similar expression to an unmodified T7 promoter. The LA-DNA was created by the addition of monovalent streptavidin (16) to the double-stranded DNA PCR product so that each biotin in the T7 promoter bound to a single monovalent streptavidin molecule (fig. S2). As a fully “photocleaved” control, we used amine-only DNA in which only the C6-amino groups are present in the T7 promoter and therefore does not bind streptavidin. We observed rapid and efficient photocleavage of the monovalent streptavidin with the biotin and linker group from the LA-DNA under a 365-nm ultraviolet (UV) light, as measured by gel electrophoresis (fig. S2). No binding of streptavidin was observed for the amine-only DNA (fig. S2).

Fig. 1 Construction and evaluation of a light-activated promoter.
(A) T7 RNA polymerase is blocked from binding to the LA-T7 promoter due to the presence of multiple monovalent streptavidins, bound to the DNA through biotinylated PC linkers. Following UV light cleavage of the linkers, T7 RNA polymerase can transcribe the downstream gene. (B) LA-T7 promoter sequence. Pink-colored thymines are replaced with amino-C6-dT modifications and the primary amines of the nucleobase coupled to the PC biotin group. (C) LA-DNA encoding for mVenus is only expressed upon UV irradiation. There is no significant difference between expression from the LA-DNA (+UV) and expression from the amine-only DNA construct. a.u., arbitrary unit.

First Human Tests of Memory Boosting Brain Implant—a Big Leap Forward

“You have to begin to lose your memory, if only bits and pieces, to realize that memory is what makes our lives. Life without memory is no life at all.” — Luis Buñuel Portolés, Filmmaker

Image Credit: Shutterstock.com
Every year, hundreds of millions of people experience the pain of a failing memory.

The reasons are many:

  • traumatic brain injury, which haunts a disturbingly high number of veterans and football players; 
  • stroke or Alzheimer’s disease, which often plagues the elderly; or 
  • even normal brain aging, which inevitably touches us all.
Memory loss seems to be inescapable. But one maverick neuroscientist is working hard on an electronic cure. Funded by DARPA, Dr. Theodore Berger, a biomedical engineer at the University of Southern California, is testing a memory-boosting implant that mimics the kind of signal processing that occurs when neurons are laying down new long-term memories.

The revolutionary implant, already shown to help memory encoding in rats and monkeys, is now being tested in human patients with epilepsy — an exciting first that may blow the field of memory prosthetics wide open.

To get here, however, the team first had to crack the memory code.

Deciphering Memory
From the very onset, Berger knew he was facing a behemoth of a problem.

We weren’t looking to match everything the brain does when it processes memory, but to at least come up with a decent mimic, said Berger.

“Of course people asked: can you model it and put it into a device? Can you get that device to work in any brain? It’s those things that lead people to think I’m crazy. They think it’s too hard,” he said.

But the team had a solid place to start.

The hippocampus, a region buried deep within the folds and grooves of the brain, is the critical gatekeeper that transforms memories from short-lived to long-term. In dogged pursuit, Berger spent most of the last 35 years trying to understand how neurons in the hippocampus accomplish this complicated feat.

“At its heart, a memory is a series of electrical pulses that occur over time that are generated by a given number of neurons,” said Berger. “This is important — it suggests that we can reduce it to mathematical equations and put it into a computational framework,” he said.

Berger hasn’t been alone in his quest.
By listening to the chatter of neurons as an animal learns, teams of neuroscientists have begun to decipher the flow of information within the hippocampus that supports memory encoding. Key to this process is a strong electrical signal that travels from CA3, the “input” part of the hippocampus, to CA1, the “output” node.

“This signal is impaired in people with memory disabilities,” said Berger, “so of course we thought if we could recreate it using silicon, we might be able to restore — or even boost — memory.”

Bridging the Gap
Yet this brain’s memory code proved to be extremely tough to crack.

The problem lies in the non-linear nature of neural networks: signals are often noisy and constantly overlap in time, which leads to some inputs being suppressed or accentuated. In a network of hundreds and thousands of neurons, any small change could be greatly amplified and lead to vastly different outputs.
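The same amplification shows up in even the simplest nonlinear systems. As a generic illustration (not a model of hippocampal circuitry), two trajectories of the chaotic logistic map that start almost identically soon diverge completely:

```python
def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), a classic nonlinear
    system in which tiny input differences are rapidly amplified."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.4, 50)
b = logistic_trajectory(0.4 + 1e-9, 50)  # a near-identical start
max_gap = max(abs(x - y) for x, y in zip(a, b))  # grows to order 1
```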

It’s a chaotic black box, laughed Berger.

With the help of modern computing techniques, however, Berger believes he may have a crude solution in hand. His proof?

Use his mathematical theorems to program a chip, and then see if the brain accepts the chip as a replacement — or additional — memory module.

Berger and his team began with a simple task using rats. They trained the animals to push one of two levers to get a tasty treat, and recorded the series of CA3 to CA1 electronic pulses in the hippocampus as the animals learned to pick the correct lever. The team carefully captured the way the signals were transformed as the session was laid down into long-term memory, and used that information — the electrical “essence” of the memory — to program an external memory chip.

They then injected the animals with a drug that temporarily disrupted their ability to form and access long-term memories, causing the animals to forget the reward-associated lever. Next, implanting microelectrodes into the hippocampus, the team pulsed CA1, the output region, with their memory code.

The results were striking — powered by an external memory module, the animals regained their ability to pick the right lever.

Encouraged by the results, Berger next tried his memory implant in monkeys, this time focusing on a brain region called the prefrontal cortex, which receives and modulates memories encoded by the hippocampus.

Placing electrodes into the monkeys’ brains, the team showed the animals a series of semi-repeated images, and captured the prefrontal cortex’s activity when the animals recognized an image they had seen earlier. Then, with a hefty dose of cocaine, the team inhibited that particular brain region, which disrupted the animals’ recall.

Next, using electrodes programmed with the “memory code,” the researchers guided the brain’s signal processing back on track — and the animals’ performance improved significantly.

A year later, the team further validated their memory implant by showing it could also rescue memory deficits due to hippocampal malfunction in the monkey brain.

A Human Memory Implant
Last year, the team cautiously began testing their memory implant prototype in human volunteers.

Because of the risks associated with brain surgery, the team recruited 12 patients with epilepsy who already had electrodes implanted in their brains to track down the source of their seizures.

Repeated seizures steadily destroy critical parts of the hippocampus needed for long-term memory formation, explained Berger. So if the implant works, it could benefit these patients as well.

The team asked the volunteers to look through a series of pictures, and then recall which ones they had seen 90 seconds later. As the participants learned, the team recorded the firing patterns in both CA1 and CA3 — that is, the input and output nodes.

Using these data, the team extracted an algorithm — a specific human “memory code” — that could predict the pattern of activity in CA1 cells based on CA3 input. Compared to the brain’s actual firing patterns, the algorithm generated correct predictions roughly 80% of the time.

“It’s not perfect,” said Berger, “but it’s a good start.”
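The article doesn’t spell out the team’s model (their published decoder is a nonlinear multi-input multi-output formulation), but the general idea, predicting output-node firing from a short window of input-node history, can be sketched with a toy logistic-regression decoder on synthetic spike counts. Everything below, including the channel counts and data, is illustrative rather than the team’s actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: binned spike counts on 8 "CA3" input channels over 2000 time bins.
n_bins, n_inputs, lag = 2000, 8, 5
ca3 = rng.poisson(1.0, size=(n_bins, n_inputs)).astype(float)

# Hidden "true" mapping: the output node fires when a weighted window of
# recent input activity crosses a threshold (a stand-in for the real,
# far more complex nonlinear dynamics).
w_true = rng.normal(size=(lag, n_inputs))
drive = np.array([(ca3[t - lag:t] * w_true).sum() for t in range(lag, n_bins)])
ca1 = (drive > np.quantile(drive, 0.7)).astype(float)

# Design matrix: each row is the flattened lag-window of CA3 history.
X = np.array([ca3[t - lag:t].ravel() for t in range(lag, n_bins)])
X = np.hstack([X, np.ones((len(X), 1))])  # bias column

# Fit a logistic-regression decoder by plain gradient descent.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.05 * X.T @ (p - ca1) / len(X)

pred = 1.0 / (1.0 + np.exp(-X @ w)) > 0.5
accuracy = (pred == ca1).mean()
print(f"decoder accuracy on training bins: {accuracy:.2f}")
```

A decoder of this shape, once fitted, could in principle drive stimulation of the output region from live input recordings, which is the role the team’s chip plays.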

Using this algorithm, the researchers have begun to stimulate the output cells with an approximation of the transformed input signal.

“We have already used the pattern to zap the brain of one woman with epilepsy,” said Dr. Dong Song, an associate professor working with Berger. But he remained coy about the result, saying only that, although promising, it’s still too early to tell.

Song’s caution is warranted. Unlike the motor cortex, with its clear structured representation of different body parts, the hippocampus is not organized in any obvious way.

“It’s hard to understand why stimulating input locations can lead to predictable results,” said Dr. Thomas McHugh, a neuroscientist at the RIKEN Brain Science Institute. It’s also difficult to tell whether such an implant could save the memory of those who suffer from damage to the output node of the hippocampus.

“That said, the data is convincing,” McHugh acknowledged.

Berger, on the other hand, is ecstatic. “I never thought I’d see this go into humans,” he said.

But the work is far from done. Within the next few years, Berger wants to see whether the chip can help build long-term memories in a variety of different situations. After all, the algorithm was based on the team’s recordings of one specific task — what if the so-called memory code is not generalizable, instead varying based on the type of input that it receives?

Berger acknowledges that it’s a possibility, but he remains hopeful.

“I do think that we will find a model that’s a pretty good fit for most conditions,” he said. After all, the brain is restricted by its own biophysics; there are only so many ways that electrical signals in the hippocampus can be processed.

“The goal is to improve the quality of life for somebody who has a severe memory deficit,” said Berger. “If I can give them the ability to form new long-term memories for half the conditions that most people live in, I’ll be happy as hell, and so will most patients.”

ORIGINAL: Singularity Hub

Scientists have figured out how to make their own molecules from scratch

ETH Zurich/Lucio Isa
And the possibilities are huge.
Researchers in Switzerland have developed a new technique for making artificial molecules in the lab, and it opens up the possibility of producing micro-robots and other microscopic structures for specific tasks - such as delivering medicine to targeted areas in the body - in a way that closely mimics the body's natural processes.

"So far, no scientist has succeeded in fully controlling the sequence of individual components when producing artificial molecules on the micro-scale," said lead researcher, Lucio Isa from ETH Zurich.

The process to create these artificial molecules starts with microspheres made of polymer or silica, 1 micrometre in diameter, which are placed side by side in tiny indentations engraved in polymer templates. The indentations set the form of the finished object, and heat is then used to bond the spheres together.

The team says this new procedure is more effective than other micro-3D printing technologies because it enables scientists to build single micro-structures from multiple materials. This means they can arrange artificial molecules in the sequence and with the geometry of their choosing.



It also enables them to precisely define magnetic, non-magnetic, and differently charged areas. Small rods, tiny triangles, and basic 3D objects can be created, though the research team wants to expand the capabilities of the process further.

The researchers say the practical benefits of the process could range from
  • self-propelled micro-carriers that move in an external electric field and micro-mixers for lab-on-a-chip applications, to (eventually) 
  • micro-robots for biomedical applications which are able to grab, transport, and release other specific micro-objects.
They also think their new technique could be used to assemble larger 'superstructures' for use in areas such as photonics (light-based signal processing).

Thanks to the level of control the team has managed to develop, the process is extremely versatile. "In principle, our method can be adapted to any material, even metals," said Isa.

"The full programmability of our approach opens up new directions not only for assembling and studying complex materials with single-particle-level control but also for fabricating new microscale devices for sensing, patterning, and delivery applications," concludes the study, which has been published in the journal Science Advances.

ORIGINAL: Science Alert
DAVID NIELD
8 APR 2016

viernes, 8 de abril de 2016

Next Rembrandt

01 GATHERING THE DATA
To distill the artistic DNA of Rembrandt, an extensive database of his paintings was built and analyzed, pixel by pixel.

FUN FACT:
150 Gigabytes of digitally rendered graphics

BUILDING AN EXTENSIVE POOL OF DATA
It’s been almost four centuries since the world lost the talent of one of its most influential classical painters, Rembrandt van Rijn. To bring him back, we distilled the artistic DNA from his work and used it to create The Next Rembrandt.

We examined the entire collection of Rembrandt’s work, studying the contents of his paintings pixel by pixel. To get this data, we analyzed a broad range of materials like high resolution 3D scans and digital files, which were upscaled by deep learning algorithms to maximize resolution and quality. This extensive database was then used as the foundation for creating The Next Rembrandt.

“Data is used by many people today to help them be more efficient and knowledgeable about their daily work, and about the decisions they need to make. But in this project it’s also used to make life itself more beautiful. It really touches the human soul.”
– Ron Augustus, Microsoft

02 DETERMINING THE SUBJECT
Data from Rembrandt’s body of work showed the way to the subject of the new painting.

FUN FACT:
346 Paintings were studied


DELVING INTO REMBRANDT VAN RIJN
  • 49% FEMALE
  • 51% MALE
Throughout his life, Rembrandt painted a great number of self-portraits, commissioned portraits and group shots, Biblical scenes, and even a few landscapes. He’s known for painting brutally honest and unforgiving portrayals of his subjects, utilizing a limited color palette for facial emphasis, and innovating the use of light and shadows.

“There’s a lot of Rembrandt data available — you have this enormous amount of technical data from all these paintings from various collections. And can we actually create something out of it that looks like Rembrandt? That’s an appealing question.”
- Joris Dik, Technical University Delft
BREAKING DOWN THE DEMOGRAPHICS IN REMBRANDT’S WORK
To create new artwork using data from Rembrandt’s paintings, we had to maximize the data pool from which to pull information. Because he painted more portraits than any other subject, we narrowed down our exploration to these paintings.

Then we found the period in which the majority of these paintings were created: between 1632 and 1642. Next, we defined the demographic segmentation of the people in these works and saw which elements occurred in the largest sample of paintings. We funneled down that selection starting with gender and then went on to analyze everything from age and head direction, to the amount of facial hair present.

After studying the demographics, the data led us to a conclusive subject: a portrait of a Caucasian male with facial hair, between the ages of thirty and forty, wearing black clothes with a white collar and a hat, facing to the right.
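The funneling described above is, in essence, a sequence of filters applied to painting metadata. As a hedged illustration (the record fields and values below are invented placeholders, not the project's actual database), it might look like:

```python
# Hypothetical records standing in for the project's painting metadata.
paintings = [
    {"id": 1, "subject": "portrait", "year": 1635, "gender": "male",
     "age": 34, "facing": "right", "facial_hair": True},
    {"id": 2, "subject": "landscape", "year": 1640, "gender": None,
     "age": None, "facing": None, "facial_hair": None},
    {"id": 3, "subject": "portrait", "year": 1629, "gender": "female",
     "age": 22, "facing": "left", "facial_hair": False},
    {"id": 4, "subject": "portrait", "year": 1638, "gender": "male",
     "age": 38, "facing": "right", "facial_hair": True},
]

def funnel(records):
    """Apply the narrowing steps in order: portraits only, the 1632-1642
    peak decade, then the majority demographic traits."""
    steps = [
        lambda p: p["subject"] == "portrait",
        lambda p: 1632 <= p["year"] <= 1642,
        lambda p: p["gender"] == "male",
        lambda p: 30 <= p["age"] <= 40,
        lambda p: p["facing"] == "right" and p["facial_hair"],
    ]
    for step in steps:
        records = [p for p in records if step(p)]
    return records

selected = funnel(paintings)
```

Each step shrinks the sample, so the order matters for how much data survives to inform the later, finer-grained choices.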

03 GENERATING THE FEATURES
A software system was designed to understand Rembrandt’s style and generate new features.

FUN FACT:
500+ Hours of rendering

MASTERING THE STYLE OF REMBRANDT
In creating the new painting, it was imperative to stay accurate to Rembrandt’s unique style. As “The Master of Light and Shadow,” Rembrandt relied on his innovative use of lighting to shape the features in his paintings. By using very concentrated light sources, he essentially created a “spotlight effect” that gave great attention to the lit elements and left the rest of the painting shrouded in shadows. This resulted in some of the features being very sharp and in focus and others becoming soft and almost blurry, an effect that had to be replicated in the new artwork.

“When you want to make a new painting you have some idea of how it’s going to look. But in our case we started from basically nothing — we had to create a whole painting using just data from Rembrandt’s paintings.”
- Ben Haanstra, Developer
GENERATING FEATURES BASED ON DATA
To master his style, we designed a software system that could understand Rembrandt based on his use of geometry, composition, and painting materials. A facial recognition algorithm identified and classified the most typical geometric patterns used by Rembrandt to paint human features. It then used the learned principles to replicate the style and generate new facial features for our painting.

CONSTRUCTING A FACE OUT OF THE NEW FEATURES
Once we generated the individual features, we had to assemble them into a fully formed face and bust according to Rembrandt’s use of proportions. An algorithm measured the distances between the facial features in Rembrandt’s paintings and calculated them based on percentages. Next, the features were transformed, rotated, and scaled, then accurately placed within the frame of the face. Finally, we rendered the light based on gathered data in order to cast authentic shadows on each feature.
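The placement step can be sketched as a similarity transform: feature positions expressed as fractions of the face frame are scaled to pixels, rotated about the face centre, and translated onto the canvas. The proportions and feature names below are illustrative placeholders, not measurements from the project:

```python
import numpy as np

# Hypothetical average proportions (fractions of face width/height); the
# project's measured Rembrandt statistics are not public.
mean_positions = {
    "left_eye":  (0.33, 0.40),
    "right_eye": (0.67, 0.40),
    "nose_tip":  (0.50, 0.62),
    "mouth":     (0.50, 0.78),
}

def place_feature(rel_xy, face_origin, face_size, angle_deg=0.0):
    """Scale a relative position to pixels, rotate it about the face
    centre, and translate it into the canvas frame."""
    x, y = rel_xy
    w, h = face_size
    # relative coordinates -> pixel offsets from the face midpoint
    px, py = (x - 0.5) * w, (y - 0.5) * h
    a = np.deg2rad(angle_deg)
    rx = px * np.cos(a) - py * np.sin(a)
    ry = px * np.sin(a) + py * np.cos(a)
    ox, oy = face_origin
    return (ox + w / 2 + rx, oy + h / 2 + ry)

# Place each generated feature inside a 600x800 face region at (200, 100).
layout = {name: place_feature(p, (200, 100), (600, 800))
          for name, p in mean_positions.items()}
```

Expressing positions as percentages first, as the text describes, is what lets the same measured proportions be reused at any canvas size or head tilt.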




04 BRINGING IT TO LIFE
CREATING ACCURATE DEPTH AND TEXTURE

Analyses
We now had a digital file true to Rembrandt’s style in content, shapes, and lighting. But paintings aren’t just 2D — they have a remarkable three-dimensionality that comes from brushstrokes and layers of paint. To recreate this texture, we had to study 3D scans of Rembrandt’s paintings and analyze the intricate layers on top of the canvas.


“We looked at a number of Rembrandt paintings, and we scanned their surface texture, their elemental composition, and what kinds of pigments were used. That’s the kind of information you need if you want to generate a painting by Rembrandt virtually.”
- Joris Dik, Technical University Delft
USING A HEIGHT MAP TO PRINT IN 3D
We created a height map using two different algorithms that found texture patterns of canvas surfaces and layers of paint. That information was transformed into height data, allowing us to mimic the brushstrokes used by Rembrandt.

We then used an elevated printing technique on a 3D printer that output multiple layers of paint-based UV ink. The final height map determined how much ink was released onto the canvas during each layer of the printing process. In the end, we printed thirteen layers of ink, one on top of the other, to create a painting texture true to Rembrandt’s style.
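The project pages don’t specify how the height map was split across the thirteen passes. One plausible sketch, under that stated assumption, is to quantize the normalized height map into one binary ink mask per pass, so that stacking the passes reproduces the relief:

```python
import numpy as np

def layer_masks(height_map, n_layers=13):
    """Quantize a normalized height map (values in [0, 1]) into binary
    ink masks, one per printing pass: pass k deposits ink wherever the
    target relief is more than k quantization steps high."""
    levels = np.clip(np.rint(height_map * n_layers), 0, n_layers).astype(int)
    return [(levels > k) for k in range(n_layers)]

# Toy 4x4 relief standing in for the scanned brushstroke texture.
hm = np.array([[0.0, 0.1, 0.5, 1.0],
               [0.2, 0.6, 0.9, 0.3],
               [0.8, 0.4, 0.0, 0.7],
               [1.0, 0.5, 0.2, 0.1]])
masks = layer_masks(hm)

# Summing the per-pass masks reconstructs the quantized relief.
relief = sum(m.astype(int) for m in masks)
```

Thicker paint then simply means a pixel stays “on” through more of the thirteen passes, mimicking how layered brushstrokes build up on a canvas.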


ORIGINAL: Next Rembrandt