Showing posts with the label U of Wyoming.

Thursday, June 2, 2016

See The Difference One Year Makes In Artificial Intelligence Research

AN IMPROVED WAY OF LEARNING ABOUT NEURAL NETWORKS

Google / Geometric Intelligence. The difference between Google's generated images of 2015 and the images generated in 2016.


Last June, Google wrote that it was teaching its artificial intelligence algorithms to generate images of objects, or "dream." The A.I. tried to generate pictures of things it had seen before, like dumbbells. But it ran into a few problems. It was able to successfully make objects shaped like dumbbells, but each had disembodied arms sticking out from the handles, because arms and dumbbells were closely associated. Over the course of a year, this process has become incredibly refined, meaning these algorithms are learning much more complete ideas about the world.

New research shows that even when trained on a standardized set of images, A.I. can generate increasingly realistic images of objects it has seen before. The researchers were also able to sequence these images into low-resolution videos of actions like skydiving and playing the violin. The paper, from the University of Wyoming, Albert Ludwigs University of Freiburg, and Geometric Intelligence, focuses on deep generator networks, which not only create these images but can also show how each neuron in the network affects the entire system's understanding.

Looking at the images a model generates is important because it gives researchers a better idea of how their models process data. It's a way to look under the hood of algorithms that usually work without human intervention. By seeing what computation each neuron in the network performs, researchers can tweak the structure to be faster or more accurate.
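The idea of probing what a single neuron prefers can be illustrated with a toy version of activation maximization: gradient ascent on the input itself to find the pattern that most excites one unit. This is a simplified stand-in, not the paper's deep generator network approach, and the tiny network and all names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny one-layer "network": 4 hidden ReLU neurons over a 9-pixel input.
W = rng.normal(size=(4, 9))

def activation(x, neuron):
    """ReLU activation of one hidden neuron for input x."""
    return max(0.0, W[neuron] @ x)

def maximize_activation(neuron, steps=200, lr=0.1):
    """Gradient ascent on the input to find the pattern that most
    excites a neuron, constrained to the unit ball so the "image"
    stays bounded. Start nudged toward the weights so the ReLU is
    active from the first step."""
    x = 0.05 * W[neuron] + 0.01 * rng.normal(size=9)
    for _ in range(steps):
        if W[neuron] @ x > 0:            # d/dx ReLU(w.x) = w when active
            x = x + lr * W[neuron]
        x /= max(1.0, np.linalg.norm(x))  # project back onto the unit ball
    return x

x_star = maximize_activation(neuron=0)
# The preferred input aligns with the neuron's weight vector, so its
# activation approaches the maximum possible on the unit ball, ||W[0]||.
print(activation(x_star, 0), np.linalg.norm(W[0]))
```

For this linear toy the answer is known in closed form (the weight vector itself), which makes it easy to check that the optimization loop behaves as described; real networks need the generator-based machinery the paper describes precisely because no such closed form exists.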

"With real images, it is unclear which of their features a neuron has learned," the team wrote. "For example, if a neuron is activated by a picture of a lawn mower on grass, it is unclear if it ‘cares about’ the grass, but if an image...contains grass, we can be more confident the neuron has learned to pay attention to that context."

The researchers are, in effect, studying their own research, and this technique gives them a valuable tool for continuing to do so.


ORIGINAL: Popular Science
May 31, 2016

Sunday, February 3, 2013

Engineers solve a biological mystery and boost artificial intelligence

ORIGINAL: Science Blog

By simulating 25,000 generations of evolution within computers, Cornell University engineering and robotics researchers have discovered why biological networks tend to be organized as modules – a finding that will lead to a deeper understanding of the evolution of complexity. (Proceedings of the Royal Society, Jan. 30, 2013.)

The new insight will also help researchers evolve artificial intelligence, so robot brains can acquire the grace and cunning of animals. From brains to gene regulatory networks, many biological entities are organized into modules: dense clusters of interconnected parts within a complex network. For decades biologists have wanted to know why humans, bacteria and other organisms evolved in a modular fashion. Like engineers, nature builds modularly, creating and combining distinct parts, but that does not explain how such modularity evolved in the first place. Renowned biologists Richard Dawkins, Günter P. Wagner, and the late Stephen Jay Gould identified the question of modularity as central to the debate over “the evolution of complexity.”

For years, the prevailing assumption was simply that modules evolved because modular entities could respond to change more quickly, and therefore had an adaptive advantage over their non-modular competitors. But that may not be enough to explain how the phenomenon originated.

The team discovered that evolution produces modules not because they produce more adaptable designs, but because modular designs have fewer and shorter network connections, which are costly to build and maintain. As it turned out, it was enough to include a “cost of wiring” to make evolution favor modular architectures.

Fig. 1. Main hypothesis. Evolving networks with selection for performance alone produces non-modular networks that are slow to adapt to new environments. Adding a selective pressure to minimize connection costs leads to the evolution of modular networks that quickly adapt to new environments. Source: http://arxiv.org/abs/1207.2743

This theory is detailed in “The Evolutionary Origins of Modularity,” published today in the Proceedings of the Royal Society by Hod Lipson, Cornell associate professor of mechanical and aerospace engineering; Jean-Baptiste Mouret, a robotics and computer science professor at Université Pierre et Marie Curie in Paris; and by Jeff Clune, a former visiting scientist at Cornell and currently an assistant professor of computer science at the University of Wyoming.

To test the theory, the researchers simulated the evolution of networks with and without a cost for network connections.

“Once you add a cost for network connections, modules immediately appear. Without a cost, modules never form. The effect is quite dramatic,” says Clune.

The results may help explain the near-universal presence of modularity in biological networks as diverse as neural networks – such as animal brains – and vascular networks, gene regulatory networks, protein-protein interaction networks, metabolic networks and even human-constructed networks such as the Internet.

“Being able to evolve modularity will let us create more complex, sophisticated computational brains,” says Clune.

Says Lipson: “We’ve had various attempts to try to crack the modularity question in lots of different ways. This one by far is the simplest and most elegant.”