Wednesday, April 30, 2014

Teachers still think they own knowledge: Rodolfo Llinás

The Colombian neuroscientist presented the principles that should guide education in every classroom.


"School teaches us the geographic location of rivers, but never explains why water matters. We know where Caquetá is, we memorize the names of capital cities, and we can point to Mesopotamia on a world map. We are a trunk full of content but empty of context. Hence our difficulty applying knowledge to reality." So said Rodolfo Llinás at the Cumbre Líderes por la Educación, an event organized by the magazine Semana and held this Tuesday at the Centro Cultural del Gimnasio Moderno.

During his talk, the neuroscientist argued that school education in Colombia has several challenges to take on. For one, he said, "education must be as personalized as possible (…) Transforming our educational methods cannot be postponed any longer. Education, at every level, must recognize that not all students are alike, that not everyone fits the same mold, and that it must therefore adapt to each student's individual characteristics."

Llinás also addressed teachers' attitudes toward their students. "It seems incredible, but at this point teachers still think they are the owners of knowledge," he asserted. "Teachers should be guides; their job is not to hand down instructions but to understand each student in his or her particularities and offer appropriate guidance," he added, calling attention to "respect and affection," the values on which he believes the teacher-student relationship should rest.

He likewise stressed the need to add subjects such as cosmology to school curricula and to give greater weight to artistic subjects such as music.

According to the neuroscientist, a few basic principles should govern the new educational paradigm. Among them:
  1. "There are no mysteries, only unknowns." According to Llinás, many myths can "damage the brain" and stifle children's desire to learn and their innate vocation for discovery.
  2. "Everything that exists has a prior cause." This principle underlies his call for "education in context." All teaching, Llinás says, should be governed by causality. "There are no isolated facts, yet the current educational model tends to present them as if they were detached from the complexity in which they are actually embedded."
  3. "Induction is key." "It is worth reclaiming the value of the senses and teaching students to learn through them. This dimension sometimes gets left behind," the scientist said during the talk.
  4. Deduction. For Llinás, we need to do more to stimulate students' own thinking and their construction of knowledge. "Build by using your mind."
  5. Parsimony. In education, everything should be done by the simplest method available.
For the researcher, it is also crucial that governments invest at least 1% of GDP in science.

ORIGINAL: El Espectador
Por: Maria Luna Mendoza
April 29, 2014

The Deadliest Animal in the World


What would you say is the most dangerous animal on Earth? Sharks? Snakes? Humans?

Of course the answer depends on how you define dangerous. Personally I’ve had a thing about sharks since the first time I saw Jaws. But if you’re judging by how many people are killed by an animal every year, then the answer isn’t any of the above. It’s mosquitoes.

When it comes to killing humans, no other animal even comes close. Take a look:


What makes mosquitoes so dangerous?
Despite their innocuous-sounding name—Spanish for “little fly”—they carry devastating diseases. The worst is malaria, which kills more than 600,000 people every year; another 200 million cases incapacitate people for days at a time. It threatens half of the world’s population and causes billions of dollars in lost productivity annually. Other mosquito-borne diseases include dengue fever, yellow fever, and encephalitis.

There are more than 2,500 species of mosquito, and mosquitoes are found in every region of the world except Antarctica. During the peak breeding seasons, they outnumber every other animal on Earth, except termites and ants. They were responsible for tens of thousands of deaths during the construction of the Panama Canal. And they affect population patterns on a grand scale: In many malarial zones, the disease drives people inland and away from the coast, where the climate is more welcoming to mosquitoes.

Considering their impact, you might expect mosquitoes to get more attention than they do. Sharks kill fewer than a dozen people a year, yet in the U.S. they get a whole week of TV programming dedicated to them. Mosquitoes kill 50,000 times as many people, but if there’s a TV channel that features Mosquito Week, I haven’t heard about it.

That’s why we’re having Mosquito Week on the Gates Notes.
Everything I’m posting this week is dedicated to this deadly creature. You can learn about my recent trip to Indonesia to see an ingenious way to combat dengue fever by inoculating not people, but mosquitoes. (Somehow this story involved me offering up my bare arm to a cage full of hungry mosquitoes so they could feed on my blood.) You can read a harrowing account of what it’s like to have malaria and hear from an inspiring Tanzanian scientist who’s fighting it. And I’ve shared a few thoughts from Melinda’s and my recent trip to Cambodia, where I saw some fascinating work that could point the way to eradicating malaria, which would be one of the greatest accomplishments in health ever.

I hope you’ll have a look around. I can’t promise that Anopheles gambiae will be quite as exciting as hammerheads and Great Whites. But maybe you’ll come away with a new appreciation for these flying masters of mayhem.


ORIGINAL: GatesNotes
April 25, 2014

Tuesday, April 29, 2014

Redefining Blended Learning


In over a year of writing about blended learning—largely defined as using an online delivery of content to augment face-to-face instruction—I’ve heard the same tired criticism from too many well-meaning critics. They associate the approach with 
  • all parties needing to master overly intricate programming and Web 2.0 skills, and 
  • cheapening the traditional classroom environment and the sacred student-teacher relationship.
Certainly, an advanced familiarity with all things technical comes in handy with blended learning. But set aside the 
  • most engaging screencasts, 
  • inspiring flipped classroom models, and 
  • nifty online teaching and assessment tools. 
At its core, blended learning enhances and promotes entrepreneurship and risk-taking, while giving students the skills they need to succeed in an increasingly flat, competitive, and digital 21st century. That’s it. It doesn’t really matter how “tech savvy” you are, as long as you feel inspired to rethink how you approach teaching.

With the Internet at my students’ fingertips—with smartphones, quite literally—I’m not the only or even the best source of knowledge. What I do possess, though, and what no computer could ever replicate, is a deep passion for my subjects. I’m excited and energetic about history and journalism, and I do whatever I can to infect my students with a similar love of learning and discovery. To the best of my ability, I allow and encourage self-directed learning (which is a large component of blended learning), and I don’t penalize failure harshly, so long as I see a clear passion for improvement.

How else do I define blended learning, and how do I employ it in my classroom? Here are five approaches that every teacher should consider.
  1. A large component of blended learning is self-directed learning. Whenever I assign a project or an essay, I allow students to propose their own topics. Too often, students are dependent on adults for guidance and direction. More often, we need to allow and encourage students to make their own decisions, even if that sometimes means seeing the consequences of their actions. After all, we learn equally from success and failure. At home and during class, my students explore online resources to inform their understanding. As challenging as it sometimes is, I refrain from offering immediate answers and solutions. If students can use the Internet to figure something out for themselves, they do. I might look like the bad guy, but I know it’s for their development.
  2. Along these lines, I also coach and encourage students to think about what constitutes “credible information,” and where to find it online. This is a crucial skill for students to master, no matter what profession they ultimately pursue. At home, students use the Internet to learn about their own subjects, whether from videos, articles, or other media. This is blended learning, but it’s also self-directed learning. As online learning tools advance, teachers will become increasingly responsible for guiding students to appropriate sources, rather than teaching material directly themselves.
  3. As I see it, a crucial part of the blended classroom environment includes teachers and students working and learning together. In my experience, this often involves using online tools. Three years ago, I worked with a former student to launch The Falconer, an online student news site. I can’t recount how many hours he and I spent watching “how-to” YouTube videos, learning not only about coding and maintaining a website, but also about recording podcasts, editing video, and streaming live events.
  4. Blended learning also calls for rethinking how we assess, with or without online tools. In the near future, competency-based learning, where students progress at their own pace to master a predetermined set of skills, will lead the way. Grades will become less and less significant, eventually vanishing completely. Higher education is already making this a reality. At College for America at Southern New Hampshire University (SNHU), students demonstrate mastery of 120 “competencies” rather than earn class credit. “Students show their mastery by completing tasks,” SNHU President Paul J. LeBlanc told me last May. “These are real-world hypotheticals. They’re not exams, they’re not the kind of isolated assignments you might get in a college class. They’re meant to be hypothetical, which mimic more closely how that competency is used in the real world.” I give grades because I have to, but in most cases, I allow students an infinite number of retakes.
  5. Similarly, blended learning also calls for rethinking how we teach, with or without online tools. Students want to see the relevance in what they learn, and that’s hard to accomplish by studying subjects in isolation. On that front, I admit that my own teaching needs improvement. I should do more to reach out to teachers in other fields who are interested in having students work on a shared project. After all, students might be told that learning about the Civil War or the quadratic formula is important, but unless they can put such knowledge to practical use, they will eventually forget it. But if students can see that a concept has multiple applications, especially to real-world scenarios, they will be much more likely to remember.

What are your thoughts on blended learning, and how it’s defined? I would love to hear your thoughts in the comments section below.

ORIGINAL: Spin Edu

Stanford bioengineers create circuit board modeled on the human brain

Stanford bioengineers have developed faster, more energy-efficient microchips based on the human brain – 9,000 times faster and using significantly less power than a typical PC. This offers greater possibilities for advances in robotics and a new way of understanding the brain. For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions.


The Neurogrid circuit board can simulate orders of magnitude more neurons and synapses than other brain mimics on the power it takes to run a tablet computer.

Stanford bioengineers have developed a new circuit board modeled on the human brain, possibly opening up new frontiers in robotics and computing.

For all their sophistication, computers pale in comparison to the brain. The modest cortex of the mouse, for instance, operates 9,000 times faster than a personal computer simulation of its functions.

Not only is the PC slower, it takes 40,000 times more power to run, writes Kwabena Boahen, associate professor of bioengineering at Stanford, in an article for the Proceedings of the IEEE.

"From a pure energy perspective, the brain is hard to match," says Boahen, whose article surveys how "neuromorphic" researchers in the United States and Europe are using silicon and software to build electronic systems that mimic neurons and synapses.

Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed "Neurocore" chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections. The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. The result was Neurogrid – a device about the size of an iPad that can simulate orders of magnitude more neurons and synapses than other brain mimics on the power it takes to run a tablet computer.

The National Institutes of Health funded development of this million-neuron prototype with a five-year Pioneer Award. Now Boahen stands ready for the next steps – lowering costs and creating compiler software that would enable engineers and computer scientists with no knowledge of neuroscience to solve problems – such as controlling a humanoid robot – using Neurogrid.

Its speed and low power characteristics make Neurogrid ideal for more than just modeling the human brain. Boahen is working with other Stanford scientists to develop prosthetic limbs for paralyzed people that would be controlled by a Neurocore-like chip.

"Right now, you have to know how the brain works to program one of these," said Boahen, gesturing at the $40,000 prototype board on the desk of his Stanford office. "We want to create a neurocompiler so that you would not need to know anything about synapses and neurons to be able to use one of these."
Brain ferment

In his article, Boahen notes the larger context of neuromorphic research, including the European Union's Human Brain Project, which aims to simulate a human brain on a supercomputer. By contrast, the U.S. BRAIN Project – short for Brain Research through Advancing Innovative Neurotechnologies – has taken a tool-building approach by challenging scientists, including many at Stanford, to develop new kinds of tools that can read out the activity of thousands or even millions of neurons in the brain as well as write in complex patterns of activity.

Zooming from the big picture, Boahen's article focuses on two projects comparable to Neurogrid that attempt to model brain functions in silicon and/or software.

One of these efforts is IBM's SyNAPSE Project – short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. As the name implies, SyNAPSE involves a bid to redesign chips, code-named Golden Gate, to emulate the ability of neurons to make a great many synaptic connections – a feature that helps the brain solve problems on the fly. At present a Golden Gate chip consists of 256 digital neurons each equipped with 1,024 digital synaptic circuits, with IBM on track to greatly increase the numbers of neurons in the system.

Heidelberg University's BrainScales project has the ambitious goal of developing analog chips to mimic the behaviors of neurons and synapses. Their HICANN chip – short for High Input Count Analog Neural Network – would be the core of a system designed to accelerate brain simulations, to enable researchers to model drug interactions that might take months to play out in a compressed time frame. At present, the HICANN system can emulate 512 neurons each equipped with 224 synaptic circuits, with a roadmap to greatly expand that hardware base.

Each of these research teams has made different technical choices, such as whether to dedicate each hardware circuit to modeling a single neural element (e.g., a single synapse) or several (e.g., by activating the hardware circuit twice to model the effect of two active synapses). These choices have resulted in different trade-offs in terms of capability and performance.

In his analysis, Boahen creates a single metric to account for total system cost – including the size of the chip, how many neurons it simulates and the power it consumes.

Neurogrid was by far the most cost-effective way to simulate neurons, in keeping with Boahen's goal of creating a system affordable enough to be widely used in research.

Speed and efficiency
But much work lies ahead. Each of the current million-neuron Neurogrid circuit boards costs about $40,000. Boahen believes dramatic cost reductions are possible. Neurogrid is based on 16 Neurocores, each of which supports 65,536 neurons. Those chips were made using 15-year-old fabrication technologies.

By switching to modern manufacturing processes and fabricating the chips in large volumes, he could cut a Neurocore's cost 100-fold – suggesting a million-neuron board for $400 a copy. With that cheaper hardware and compiler software to make it easy to configure, these neuromorphic systems could find numerous applications.
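The board's scale and the hoped-for price drop can be checked with quick back-of-envelope arithmetic on the figures quoted above (a sketch, not the team's own accounting):

```python
# Back-of-envelope check of the Neurogrid figures quoted in the article.
neurocores_per_board = 16
neurons_per_neurocore = 65_536
neurons_per_board = neurocores_per_board * neurons_per_neurocore
print(neurons_per_board)                # 1048576 -- the "1 million neurons"

prototype_cost = 40_000                 # dollars per board today
projected_cost = prototype_cost // 100  # the hoped-for 100-fold reduction
print(projected_cost)                   # 400 -- "a million-neuron board for $400"

# Implied cost per simulated neuron, in cents, before and after:
print(round(100 * prototype_cost / neurons_per_board, 2))   # 3.81
print(round(100 * projected_cost / neurons_per_board, 4))   # 0.0381
```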

For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions – but without being tethered to a power source. Krishna Shenoy, an electrical engineering professor at Stanford and Boahen's neighbor at the interdisciplinary Bio-X center, is developing ways of reading brain signals to understand movement. Boahen envisions a Neurocore-like chip that could be implanted in a paralyzed person's brain, interpreting those intended movements and translating them to commands for prosthetic limbs without overheating the brain.

A small prosthetic arm in Boahen's lab is currently controlled by Neurogrid to execute movement commands in real time. For now it doesn't look like much, but its simple levers and joints hold hope for robotic limbs of the future.

Of course, all of these neuromorphic efforts are beggared by the complexity and efficiency of the human brain.

In his article, Boahen notes that Neurogrid is about 100,000 times more energy efficient than a personal computer simulation of 1 million neurons. Yet it is an energy hog compared to our biological CPU.

"The human brain, with 80,000 times more neurons than Neurogrid, consumes only three times as much power," Boahen writes. "Achieving this level of energy efficiency while offering greater configurability and scale is the ultimate challenge neuromorphic engineers face."
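The two ratios Boahen cites can be combined into a rough per-neuron comparison. This is only illustrative arithmetic on the article's figures, not a measurement:

```python
# What the quoted ratios imply, per simulated neuron.
neurogrid_vs_pc = 100_000    # Neurogrid vs. a PC simulating 1M neurons
brain_neuron_ratio = 80_000  # the brain has 80,000x Neurogrid's neurons...
brain_power_ratio = 3        # ...while drawing only 3x the power

# Per neuron, the brain beats Neurogrid by roughly:
brain_vs_neurogrid = brain_neuron_ratio / brain_power_ratio
print(round(brain_vs_neurogrid))                      # 26667 -- about 27,000x

# Chaining the ratios: the brain vs. a conventional PC simulation.
print(f"{neurogrid_vs_pc * brain_vs_neurogrid:.1e}")  # 2.7e+09
```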

Tom Abate writes about the students, faculty and research of the School of Engineering. Amy Adams of Stanford University Communications contributed to this report.

For more Stanford experts in bioengineering and other topics, visit Stanford Experts.


ORIGINAL: Stanford
BY TOM ABATE
April 28, 2014

Monday, April 28, 2014

At the origin of cell division


SISSA

Droplets of filamentous material enclosed in a lipid membrane: these are the models of a "simplified" cell used by the SISSA physicists Luca Giomi and Antonio DeSimone, who simulated the spontaneous emergence of cell motility and division -- that is, features of living material -- in inanimate "objects." The research is one of the cover stories of the April 10th online issue of the journal Physical Review Letters. Giomi and DeSimone's artificial cells are in fact computer models that mimic some of the physical properties of the materials making up the inner content and outer membrane of cells.

The two researchers varied some of the parameters of the materials, recording what happened: "our 'cells' are a 'bare bones' representation of a biological cell, which normally contains microtubules, elongated proteins enclosed in an essentially lipid cell membrane," explains Giomi, first author of the study. "The filaments contained in the 'cytoplasm' of our cells slide over one another exerting a force that we can control."

The force exerted by the filaments is the variable that competes with another force: the surface tension that keeps the membrane surrounding the droplet from collapsing. This competition generates a flow in the fluid surrounding the droplet, which in turn is propelled by its own self-generated flow. When the flow becomes very strong, the droplet deforms to the point of dividing. "When the force of the flow prevails over the force that keeps the membrane together we have cellular division," explains DeSimone, director of the SISSA mathLab, SISSA's mathematical modelling and scientific computing laboratory.
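The competition described here, active stress from the sliding filaments versus the surface tension holding the membrane together, can be illustrated as a dimensionless ratio against the droplet's Laplace pressure. This is only a schematic sketch with made-up parameter names, not the authors' actual model:

```python
def division_tendency(active_stress, surface_tension, radius):
    """Ratio of the destabilizing active stress to the stabilizing
    Laplace pressure (surface_tension / radius) of a droplet.

    Values well above 1 suggest the self-generated flow can deform
    the droplet enough to split it; values below 1 suggest the
    membrane's surface tension holds it together.
    """
    laplace_pressure = surface_tension / radius
    return active_stress / laplace_pressure

# Illustrative numbers only (arbitrary units): raising the filaments'
# activity, as in the simulations, pushes the ratio past the threshold.
weak = division_tendency(active_stress=0.2, surface_tension=1.0, radius=1.0)
strong = division_tendency(active_stress=5.0, surface_tension=1.0, radius=1.0)
print(weak, strong)   # 0.2 5.0 -- only the second droplet would divide
```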

"We showed that by acting on a single physical parameter in a very simple model we can reproduce similar effects to those obtained with experimental observations," continues DeSimone. Empirical observations on microtubule specimens have shown that these also move outside the cell environment, in a manner proportional to the energy they have (derived from ATP, the cell "fuel"). "Similarly, our droplets, fuelled by their 'inner' energy alone -- without forces acting from the outside -- are able to move and even divide."

"Acquiring motility and the ability to divide is a fundamental step for life and, according to our simulations, the laws governing these phenomena could be very simple. Observations like ours can pave the way for the creation of functioning artificial cells, and not only that," comments Giomi. "Our work is also useful for understanding the transition from non-living to living matter on our planet. The development of the early forms of life, in other words."

Chemists and biologists who study the origin of life don't have access to cells that are sufficiently simple to be observed directly. "Even the simplest organism existing today has undergone billions of years of evolution," explains Giomi, "and will always contain fairly complex structures. Starting from schematic organisms as we do is like turning the clock back to when the first rudimentary living beings made their first appearance. We are currently starting studies to understand how cell metabolism emerged."

VIDEO: Artificial cell simulation (courtesy of Physical Review Letters): http://goo.gl/vLDcbB
Source: Sissa Medialab


ORIGINAL: eScienceNews
April 16, 2014 - 20:28 in Physics & Chemistry

The disruptive potential of solar power

As costs fall, the importance of solar power to senior executives is rising.
The economics of solar power are improving. It is a far more cost-competitive power source today than it was in the mid-2000s, when installations and manufacturing were taking off, subsidies were generous, and investors were piling in. Consumption continued rising even as the MAC Global Solar Energy Index fell by 50 percent between 2011 and the end of 2013, a period when dozens of solar companies went bankrupt, shut down, or changed hands at fire-sale prices.

Image Original: MAC Global Solar Energy Index
The bottom line: the financial crisis, cheap natural gas, subsidy cuts by cash-strapped governments, and a flood of imports from Chinese solar-panel manufacturers have profoundly challenged the industry’s short-term performance. But they haven’t undermined its potential; indeed, global installations have continued to rise—by over 50 percent a year, on average, since 2006. The industry is poised to assume a bigger role in global energy markets; as it evolves, its impact on businesses and consumers will be significant and widespread. Utilities will probably be the first, but far from the only, major sector to feel solar’s disruptive potential.

Economic fundamentals 
Sharply declining costs are the key to this potential. The price US residential consumers pay to install rooftop solar PV (photovoltaic) systems has plummeted from nearly $7 per watt peak of best-in-class system capacity in 2008 to $4 or less in 2013. Most of this decline has been the result of steep reductions in upstream (or “hard”) costs, chiefly equipment. Module costs, for example, fell by nearly 30 percent a year between 2008 and 2013, while cumulative installations soared from 1.7 gigawatts in 2009 to an estimated 11 gigawatts by the end of 2013, according to GTM Research.

While module costs should continue to fall, even bigger opportunities lurk in the downstream (or “soft”) costs associated with installation and service. Financing, customer acquisition, regulatory incentives, and approvals collectively represent about half the expense of installing residential systems in the United States. Our research suggests that as they become cheaper, the overall costs to consumers are poised to fall to $2.30 by 2015 and to $1.60 by 2020.
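The declines quoted above translate into compound annual rates with a short calculation (a sketch using the article's figures; the helper name is mine):

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1 / years) - 1

# Installed residential system price: ~$7/Wp in 2008 to ~$4/Wp in 2013.
print(f"{cagr(7.0, 4.0, 5):.1%}")    # -10.6% per year

# Cumulative US installations: 1.7 GW in 2009 to ~11 GW in 2013 --
# consistent with the >50%-a-year global growth cited earlier.
print(f"{cagr(1.7, 11.0, 4):.1%}")   # 59.5% per year
```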

These cost reductions will put solar within striking distance, in economic terms, of new construction for traditional power-generation technologies, such as coal, natural gas, and nuclear energy. That’s true not just for residential and commercial segments, where it is already cost competitive in many (though not all) geographies, but also, eventually, for industrial and wholesale markets. Exhibit 1 highlights the progress solar already has made toward “grid parity” in the residential segment and the remaining market opportunities as it comes further down the curve. China is investing serious money in renewables. Japan’s government is seeking to replace a significant portion of its nuclear capacity with solar in the wake of the Fukushima nuclear accident. And in the United States and Europe, solar adoption rates have more than quadrupled since 2009.

Exhibit 1
A sharp decline in installation costs for solar photovoltaic systems has boosted the competitiveness of solar power.

While these economic powerhouses represent the biggest prizes, they aren’t the only stories. Sun-drenched Saudi Arabia, for example, now considers solar sufficiently attractive to install substantial capacity by 2032, with an eye toward creating local jobs. And in Africa and India, where electric grids are patchy and unreliable, distributed generation is increasingly replacing diesel and electrifying areas previously without power. Economic fundamentals (and in some cases, such as Saudi Arabia, the desire to create local jobs) are creating a brighter future for solar.

Business consumption and investment
Solar’s changing economics are already influencing business consumption and investment. In consumption, a number of companies with large physical footprints and high power costs are installing commercial-scale rooftop solar systems, often at less than the current price of buying power from a utility. For example, Wal-Mart Stores has stated that it will switch to 100 percent renewable power by 2020, up from around 20 percent today. Mining and defense companies are looking to solar in remote and demanding environments. In the hospitality sector, Starwood Hotels and Resorts has partnered with NRG Solar to begin installing solar at its hotels. Verizon is spending $100 million on solar and fuel-cell technology to power its facilities and cell-network infrastructure. Why are companies doing such things? To

  • diversify their energy supply, 
  • save money, and 
  • appeal to consumers. 
These steps are preliminary, but if they work, solar initiatives could scale up fast.

As for investment, solar’s long-term contracts and relative insulation from fuel-price fluctuations are proving increasingly attractive. The cost of capital also is falling. Institutional investors, insurance companies, and major banks are becoming more comfortable with the risks (such as weather uncertainty and the reliability of components) associated with long-term ownership of solar assets. Accordingly, investors are more and more willing to underwrite long-term debt positions for solar, often at costs of capital lower than those of traditional project finance.

Major players also are creating advanced financial products to meet solar’s investment profile. The best example of this to date is NRG Yield, and we expect other companies to unveil similar securities that pool renewable operating assets into packages for investors. Google has been an active tax-equity investor in renewable projects, deploying more than $1 billion since 2010. It also will be interesting to track the emergence of solar projects financed online via crowdsourcing (the best example is Solar Mosaic, which brings investors and solar-energy projects together). This approach could widen the pool of investors while reducing the cost of capital for smaller installations, in particular.

Disruptive potential
The utility sector represents a fascinating example of the potential for significant disruption as costs fall, even as solar’s scale remains relatively small. Although solar accounts for less than half a percent of US electricity generation, the business model for utilities depends not so much on the current generation base as on installations of new capacity. Solar could seriously threaten the latter because its growth undermines the utilities’ ability to count on capturing all new demand, which historically has fueled a large share of annual revenue growth. (Price increases have accounted for the rest.)

Depending on the market, new solar installations could now account for up to half of new consumption (in the first ten months of 2013, more than 20 percent of new US installed capacity was solar). By altering the demand side of the equation, solar directly affects the amount of new capital that utilities can deploy at their predetermined return on equity. In effect, though solar will continue to generate a small share of the overall US energy supply, it could well have an outsize effect on the economics of utilities—and therefore on the industry’s structure and future (Exhibit 2).

Exhibit 2

Although solar power will continue to account for a small share of the overall US energy supply, it could well have an outsize effect on the economics of utilities.

That’s already happening in Europe. Over the last several years, the demand for power has fallen while the supply of renewables (including solar) has risen, driving down power prices and depressing the penetration of conventional power sources. US utilities can learn many lessons from their European counterparts, which for the most part stood by while smaller, more nimble players led the way. Each US utility will have to manage the risks of solar differently. All of them, however, will have to do something.

Broader management implications
As solar becomes more economic, it will create new battlegrounds for business and new opportunities for consumers. When a solar panel goes up on a homeowner’s roof, the installer instantly develops a potentially sticky relationship with that customer. Since the solar installation often puts money in the homeowner’s pocket from day one, it is a relationship that can generate goodwill. But, most important, since solar panels are long-lived assets, often with power-purchase agreements lasting 15 or 20 years, the relationship also should be enduring.

That combination may make solar installers natural focal points for the provision of many products and services, from security systems to mortgages to data storage, thermostats, smoke detectors, energy-information services, and other in-home products. As a result, companies in a wide range of industries may benefit from innovative partnerships built on the deep customer relationships that solar players are likely to own. Tesla Motors already has a relationship with SolarCity, for example, to develop battery storage coupled with solar. It is easy to imagine future relationships between many other complementary players. These possibilities suggest a broader point: the solar story is no longer just about technology and regulation. Rather, business-model innovation and strong management practices will play an increasingly important role in the sector’s evolution and in the way it engages with a range of players from other industries. Segmenting customers, refining pricing strategies, driving down costs, and optimizing channel relationships all will figure prominently in the solar-energy ecosystem, as they do elsewhere.

As solar becomes integrated with energy-efficiency solutions, data analytics, and other technologies (such as storage), it will become an increasingly important element in the next generation of resource-related services and of the world’s coming resource revolution. In the not too distant future, a growing number of industries will have to take note of the promise, and sometimes the threat, of solar to business models based on traditional energy economics. But, in the meantime, the battle for the customer is taking place today, with long-term ramifications for existing industry structures.

About the authors
David Frankel is an associate principal in McKinsey’s San Francisco office, where Dickon Pinner is a principal; Ken Ostrowski is a director in the Atlanta office.

The authors would like to thank Stefan Heck, Sean Kane, and Farah Mandich for their contributions to this article.

ORIGINAL: McKinsey
by David Frankel, Kenneth Ostrowski, and Dickon Pinner 
April 2014

The First Poem Published in a Scientific Journal

An ode to the ocean’s bioluminescent marvels.

Image courtesy of the J. Woodland Hastings Lab, Harvard University
We’ve already seen science as a muse of painting, music, sculpture, and design. In 2001, the poetic muse struck Smith College life sciences professor and clock researcher Mary E. Harrington who, smitten by the circadian rhythms of the bioluminescent algae Gonyaulax polyedra, penned a poem about these whimsical organisms. It appeared on the pages of the June 2001 issue of the Journal of Biological Rhythms and is considered the first poem to be published in a strictly scientific journal. (I discovered it through a passing mention in the excellent Internal Time: Chronotypes, Social Jet Lag, and Why You’re So Tired.)


If the lazy dinoflagellate
should lay abed
refuse to photosynthesize,
realize:
the clock will not slow

but it will grow faint
weaker
weaker

barely whispering at the end
"rise"
"rise"

to little effect.
The recalcitrant Gonyaulax
arms crossed
snorts
“No longer will
they call my life
(my life!)
‘just hands’.
I am sticking to the sea bed!”

ORIGINAL: Brain Pickings

Sunday, April 27, 2014

How to Keep Milk from Spoiling Without Refrigeration



For centuries, before refrigeration, an old Russian practice was to drop a frog into a bucket of milk to keep the milk from spoiling. In modern times, many believed that this was nothing more than an old wives' tale. But researchers at Moscow State University, led by organic chemist Dr. Albert Lebedev, have shown that there could be some benefit to doing this, though of course in the end you'll be drinking milk that a frog was in.

Ice boxes first became available to consumers in the early to mid-19th century and, with that, the ice trade became big business. New England and Norway became major purveyors of ice, but anywhere it was cold, ice was a major export. Usually made out of wood with tin or zinc walls and insulation material like sawdust, cork, or straw, ice boxes were popular until they were rendered obsolete by the electrical refrigerator starting around the 1930s.

Jacob Perkins invented the first version of the refrigerator in 1834 when it was discovered that the hazardous compound ammonia, when liquefied, had a cooling effect. But it wasn't until the late 1920s when Freon was developed by General Motors and DuPont as a "nontoxic" cooling agent, and replaced ammonia, that refrigerators for consumers started to gain traction.

Despite the prevalence of ice in parts of Russia, many small rural Russian villages didn't have access to ice boxes, so people had to find other ways to keep things cold and unspoiled. A practice developed, and continued into the 20th century, that Dr. Lebedev recalls from his childhood:

[For] small portions of milk to drink, they used to put [a] frog inside… A small frog over there could prevent the milk from being spoiled.

This rather curious practice was the inspiration for a study and, then, a discovery that may lead to a significant new source of antibiotics. In 2010, scientists from United Arab Emirates University announced that secretions from certain frogs' skins have antibacterial and antifungal properties. Using species native to African countries, they studied the compounds coming from the frogs, known as antimicrobial peptides, which are strings of amino acids.

After isolating these compounds, they began testing them against various bacterial infections. For example, the dreaded "Iraqibacter," a drug-resistant bacterial infection that has been known to hit wounded soldiers in Iraq, could (once again, potentially) be fought with a compound found in the skin of the mink frog, a species native to North America. Secretions from a foothill four-legged frog may have the potential to fight the well-known resistant MRSA staph skin infection.

In 2012, scientists from Moscow State University decided to take this a step further by breaking down the compounds and studying the individual peptides. In a study entitled "Composition and Antimicrobial Activity of Skin Peptidome of Russian Brown Frogs" published in the Journal of Proteome Research in November 2012, and using Russian brown frogs (which are edible and considered a delicacy), they extracted secretions by applying electrodes.

What came out was a cocktail of 76 different peptides that all had different properties. Michael Zasloff, now a professor at Georgetown University but formerly a researcher with the National Institutes of Health, said in an interview, "What is amazing is that no two frogs have the same cocktail. They're all different, and all beautifully tuned to deal with the microbes that these animals face."

As promising as the results are so far, many scientists are skeptical of any real benefit coming from them. For instance, Jun O. Liu, a professor of pharmacology at Johns Hopkins University School of Medicine, stated in reference to other apparently naturally occurring "magic antibiotics": "There are natural substances that work in a lab beautifully but then when you give it to a human it's totally inactive or it's toxic."

While this all may or may not ultimately prove medicinally helpful for humans, beginning centuries ago certain Russians seem to have been on to something with putting frogs in milk to delay its spoiling. Although I think we can all agree that putting a frog in one's milk takes a back seat to the other age-old way to store milk without refrigeration: making it into delicious cheese.


Bonus Facts:
The world's frog population is currently dwindling. For example, the United States amphibian population (which includes frogs, toads, salamanders, and newts) has lately been declining by 3.7 percent per year, according to a U.S. Geological Survey report released in May 2013. While the study didn't give firm answers as to why this is happening, the scientists in the report speculated that possible factors include climate change, disease, and drought.
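A quick calculation shows what a steady 3.7 percent annual decline implies; the half-life figure below is our arithmetic applied to the USGS rate quoted above, not a claim from the survey itself:

```python
# What a steady 3.7% annual decline implies (our arithmetic applied
# to the USGS rate quoted above; not a figure from the survey).
import math

def years_to_fraction(annual_decline, fraction):
    """Years until a population shrinking at a fixed annual rate
    falls to the given fraction of its starting size."""
    return math.log(fraction) / math.log(1 - annual_decline)

# At 3.7% per year, the population would halve in roughly 18.4 years
# and fall to a quarter in under 37.
print(round(years_to_fraction(0.037, 0.5), 1))
```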
"Freon" is the trade name for a group of chemicals known as chlorofluorocarbons, or CFCs. Refrigerators and air conditioners were developed to use these chemicals and were sold as consumer products for the home. It has since been discovered that CFCs, including Freon, are irrefutably linked to the depletion of the ozone layer.


This post has been republished with permission from TodayIFoundOut.com. Image by Horatiu Curutiu under Creative Commons license.

ORIGINAL: Gizmodo
Matt Blitz

Contest taps Antioquia's knack for invention

The first edition of ¿Quién se le mide? drew 1,600 participants. This year the contest is seeking cold-chain systems, defibering machines, a sterilization program, and more.


Last year, Rafael Jairo Vides, Ferney López, and Cristian Fajardo, three young men from Urabá, took on the challenge of inventing a device to detect and destroy antipersonnel mines, something essential and urgent in a country where these lethal devices killed 2,167 people and wounded 8,515, mostly soldiers and farmers, between 1990 and March 2014.

The result was a kind of robot, shaped like a small cart, that moves across fields under remote control and fully meets the challenge set by the departmental government in its contest ¿Quién se le mide?, which sought a way to detect mines without people being killed or maimed in the process, as so often happens.

This year the contest reaches its second edition with higher expectations than the first, in which 1,600 researchers, both individuals and companies, took part and which yielded 20 solutions to everyday problems of Antioquian society. Participation is expected to be even larger.

"For us, taking part was very positive, because we have been able to grow and even improve the prototype we invented. It detects metals, but we have now improved it to detect any type of material," says Rafael Jairo, who, along with his two teammates, never sits still. They have since invented a device with the same function, but airborne, which is being tested on ideal terrain, such as the Voltígeros Battalion of the 17th Brigade in Urabá.

"Winning the contest has helped us get into other projects, because we like to create," adds Rafael Jairo, a Sena graduate and tireless researcher.

An open contest

Alejandro Olaya, director of Science and Technology for the Antioquia departmental government, the office that organizes the contest, explains that researchers of every kind may enter: individuals, research groups, and companies. The contest applies the principle of open innovation; what matters is the solution, and encouraging entrepreneurship.

"Anyone can take part: an enterprising student, an independent inventor, universities, or scientific groups. The aim is to connect people we believe can invent solutions with the communities that suffer the real problems," he explains.

He recalls that last year, in the first edition, the participants were numerous and diverse, evidence of a restless, highly creative Antioquian community in which invention knows no limits.

The company JM Estrada, for example, which has spent 150 years in the market creating solutions for agriculture in coffee, sugarcane, and livestock, took on the contest and, with engineer Jorge Estrada at the helm, won three challenges:
  • One: building a gas-fired dryer for cacao beans.
  • Two: designing a roaster that brings out the chemical reactions of specialty coffees.
  • Three: a solar dryer for dehydrating medicinal and culinary plants.
"Several of us entered this last one, but they split the prize between two: us and a young man who presented a different system," says Margarita Castaño, JM's deputy manager, who values the contest for its openness and for letting people with very few resources unleash their creativity and compete without inhibition against companies like hers, with more capital and tradition.

"The prize let us assign a dedicated team of engineers and technicians to build the prototypes, which are already on the market," she says.

One of the basic ideas is that, beyond having the inventions solve real problems in farming communities across the region, the winners, and even participants who do not win, can turn them into ventures.

"We award prototypes, not ideas (that is, the device itself), but the creators own their inventions and can develop them and build companies around them," Olaya notes.

That is the case for the winners mentioned above and for Jorge Enrique García, a young man from Medellín who won the challenge of creating a device to harvest avocados hanging more than 4 meters up, so that the fruit is not bruised on the way down and picking goes faster.

"We built a pole like the ones on the market, but this one lets the fruit descend through a tube that is part fabric, part mesh. The fruit drops into the mesh without bruising, and the picker immediately transfers it to a basket," Jorge explains.

He adds that the same pole can also pick mangoes, oranges, and possibly other fruits similar to the avocado. He has come so far that he has already registered the brand Bajafácil, under which he will mass-produce the device for sale in regions such as Bajo Cauca, Santa Fe de Antioquia, and the departments of Caldas and Santander, where the fruit is grown.

He admits the mesh the avocado fell into had to be improved: the first version tore under fruit heavier than a kilo. He has since fitted a mesh with smaller holes, but set a rule of picking only avocados up to one kilo.

Tecnnova helps choose
But who picks the winners, and who defines the challenges?

That task falls to Tecnnova, a nonprofit created by twelve universities, which takes part in three key stages of the ¿Quién se le mide? contest. The first is identifying, across the departmental government's various secretariats, the problems they want solved in the communities.

"Once the challenges are clear, the problems are surveyed and prioritized," explains Alejandro Franco, Tecnnova's director.

In phase two, once the inventions have been submitted, Tecnnova and subject-matter experts, with help from those affected by the problems, choose the winners. In the final stage, phase three, they supervise and advise to ensure the prize money is invested in developing the winning solution.

"Two things about the contest stand out: it produces real solutions to real problems, which is its social component, and it turns the creator into an entrepreneur who grows and develops a business."

This year there are 20 challenges on topics as diverse as mining, health, agriculture, agroindustry, infrastructure, and specialty coffees. One striking example is the creation of a cold-chain system to carry vaccines to indigenous communities.

Antioquia has roughly 35,000 indigenous people, of whom 29,200 live in rural areas of 31 municipalities. Of that rural population, 41.4 percent are Emberá, living in the municipalities of Dabeiba, Urrao, Vigía del Fuerte, Ituango, Frontino, and Murindó, in hard-to-reach areas 3 or 4 days' walk away.
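As a quick check on the population figures above (the numbers are the article's; the rounding is ours):

```python
# Quick check on the article's population figures (rounding is ours).
total_rural = 29_200   # indigenous people living in rural Antioquia
embera_share = 0.414   # share reported as Emberá

embera_rural = round(total_rural * embera_share)
print(embera_rural)  # about 12,089 people
```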

The challenge is to develop equipment that can carry the vaccines without spoiling, equipment "that is portable and light to transport and does not require a permanent electricity supply to operate," as the challenge brief puts it.

There is also a challenge to create a fertility-control program for stray cats and dogs.

Prizes range from 20 to 30 million pesos, with the amount tied to the type of solution: building a piece of software is not the same as building equipment or a machine, which demand investment in instruments and other expenses, Alejandro Olaya points out.

Olaya and Alejandro Franco note that solutions to many of the challenges already exist elsewhere in the world, but they often do not apply, or do not work, in the Antioquian landscape of rugged mountains and sometimes impenetrable vegetation.

"A fique defibering machine already exists in Vietnam, but it runs on electricity, and in our fique-growing regions there is no electricity, so another energy source must be found," says Alejandro Olaya, referring to one of this year's challenges.

The thrill of creating, and of watching inventions grow into businesses, fulfills the expectations of those who take on the challenge of building solutions. You can hear it in the words of Jorge Enrique García, creator of the avocado harvester:

"I am a graphic designer but self-taught in industrial design, something I love, which is why I got into this and enjoy it so much."

REPORT

DEADLINE: AUGUST 19, 2014
  • This year's contest was launched on April 2, and the deadline for submitting inventions is August 19 of this year.
  • Prizes of 20 to 30 million pesos are awarded.
  • Challenge winners retain intellectual ownership of their creations.
  • Tecnnova is positioned to connect the capabilities of research groups and integrate complete solutions grounded in real-world needs.
  • More details at www.antioquia.gov.co/index.php/iquien-se-le-mide/

IN SHORT
The ¿Quién se le mide? contest seeks to reward creativity, stimulate entrepreneurship, and deliver solutions to the real problems facing Antioquia's communities.

ORIGINAL: El Colombiano
By GUSTAVO OSPINA ZAPATA
April 27, 2014

Coming Soon: A Smart Biosensor That Directs Cells to Kill Cancer, and Cancer-Detecting Surgery Glasses

These biosensors can further be customised to recognise factors of relevance to various patients' needs. 

Monday, April 21, 2014: Biologists at Northwestern University's McCormick School of Engineering and Applied Science have developed a groundbreaking technology that could modify human cells to create therapeutics that selectively target and destroy tumour cells in the human body without disrupting healthy cells. The unique protein biosensor engineers cells to kill cancer by helping them effectively distinguish between healthy and cancerous cells.

While sitting on the surface of a cell, the biosensor can be programmed to sense its immediate environment for specific factors, after which it sends a signal to the engineered cell's nucleus. This triggers a gene-expression programme within the cell. "To date, there was no way to engineer cells in a manner that allowed them to sense key pieces of information about their environment, which could indicate whether the engineered cell is in healthy tissue or sitting next to a tumour," Joshua Leonard, an assistant professor at Northwestern's McCormick School of Engineering and Applied Science, was quoted as saying.

Moreover, the programme is activated only in the vicinity of tumour cells, thereby minimising any side effects. These biosensors can further be customised to recognise factors of relevance to various patients' needs. "In that way, you could programme a cell-based therapy to specify which cells it should kill," Leonard added.

Meanwhile, a team of scientists at Washington University School of Medicine in St. Louis (WUSTL) and the University of Arizona (UA) has developed a pair of hi-tech glasses that help surgeons detect cancer cells. Cancer cells are invisible under normal optics, even through a high-powered magnifying device, but they glow blue when viewed through these glasses during surgery. The technology incorporates custom video and a head-mounted display; a blue dye injected into the patient binds specifically to cancer cells and makes them glow. Doctors can then easily distinguish cancer cells from healthy cells and make sure no tumour cells are left behind during surgery. The system can detect and remove tumours as small as 1 mm.

Saurabh Singh, EFYTIMES News Network 

ORIGINAL: EFY Times

Thursday, April 24, 2014

The Truth about Google X: An exclusive look behind the secretive lab's closed doors


SPACE ELEVATORS, TELEPORTATION, HOVERBOARDS, AND DRIVERLESS CARS: THE TOP-SECRET GOOGLE X INNOVATION LAB OPENS UP ABOUT WHAT IT DOES--AND HOW IT THINKS.




Astro Teller is sharing a story about something bad. Or maybe it's something good. At Google X, it's sometimes hard to know the difference.

Teller is the scientist who directs day-to-day work at the search giant's intensely private innovation lab, which is devoted to finding unusual solutions to huge global problems. He isn't the president or chairman of X, however; his actual title, as his etched-glass business card proclaims, is Captain of Moonshots--"moonshots" being his catchall description for audacious innovations that have a slim chance of succeeding but might revolutionize the world if they do. It is evening in Mountain View, California, dinnertime in a noisy restaurant, and Teller is recounting over the din how earlier in the day he had to give some unwelcome news to his bosses, Google cofounder Sergey Brin and CFO Patrick Pichette. "It was a complicated meeting," says Teller, 43, sighing a bit. "I was telling them that one of our groups was having a hard time, that we needed to course-correct, and that it was going to cost some money. Not a trivial amount." Teller's financial team was worried; so was he. But Pichette listened to the problem and essentially said, "Thanks for telling me as soon as you knew. We'll make it work."

At first, it seems Teller's point is that the tolerance for setbacks at Google X is uncharacteristically high--a situation helped along by his bosses' zeal for the work being done there and by his parent company's extraordinary, almost ungodly, profitability. But this is actually just part of the story. There happens to be a slack line--a low tightrope--slung between trees outside the Google X offices. After the meeting, the three men walked outside, took off their shoes, and gave the line a go for 20 minutes. Pichette is quite good at walking back and forth; Brin slightly less so; Teller not at all. But they all took turns balancing on the rope, falling frequently, and getting back on. The slack line is groin-high. "It looked like a fail video from YouTube," Teller says. And that's really his message here. "When these guys are willing to fall, groan, and get up--and they're in their socks?" He leans back and pauses, as if to say: This is the essence of Google X. When the leadership can fail in full view, "then it gives everyone permission to be more like that."

Failure is not precisely the goal at Google X. But in many respects it is the means. By the time Teller and I speak, I have spent most of the day inside his lab, which no journalist has previously been allowed to explore. Throughout the morning and afternoon I visited a variety of work spaces and talked at length with members of the Google X Rapid Evaluation Team, or "Rapid Eval," as they're known, about how they vet ideas and test out the most promising ones, primarily by doing everything humanly and technologically possible to make them fall apart. Rapid Eval is the start of the innovative process at X; it is a method that emphasizes rejecting ideas much more than affirming them. That is why it seemed to me that X--which is what those who work there usually call it--sometimes resembled a cult of failure. As Rich DeVaul, the head of Rapid Eval, says: "Why put off failing until tomorrow or next week if you can fail now?" Over dinner, Teller tells me he sometimes gives a hug to people who admit mistakes or defeat in group meetings.

Illustration by Owen Gildersleeve, Photo by Sam Hofman
THE GOOGLE X WAY
Wild concepts must survive rigorous vetting. Here's how one idea--Wi-Fi delivery system Project Loon--progressed.

PROBLEM IDENTIFIED: Google X's Rapid Evaluation team bats around lots of issues worth tackling. Project Loon actually started as an idea involving connections between mobile devices. But in June 2011, Rapid Eval head Rich DeVaul decided to shift focus toward increasing Internet access for rural or poor areas.

IDEA DEVELOPED: Lockheed is working on a high-altitude communication airship that can stay in one spot, but keeping such a craft stationary is extremely difficult. DeVaul had an insight: What if an airship floats away but there's another one behind it? In other words: balloons.

SOLUTION TESTED: DeVaul bought some $80 weather balloons online and assembled radio transmitters in a cardboard box that could be attached. Then he launched the contraption at the San Luis reservoir, an hour southeast of Google, and drove along under it in his Subaru.

PROTOTYPE BUILT: X executives commissioned Loon as an official project in August 2011, hiring a team to build a small fleet of prototypes. Xer Mitch Heinrich began to work on a Loon antenna; his team built a small house in their shop to see how the antenna might attach to customers' residences.

PRODUCT INTRODUCED: X brought in entrepreneur Mike Cassidy to manage the project's rollout as an actual business. The first step was a pilot program in New Zealand, where Loon went live, temporarily, in June 2013. As X now weighs interest from global telecom providers, the team is considering which business models might work best.

X does not employ your typical Silicon Valley types. Google already has a large lab division, Google Research, that is devoted mainly to computer science and Internet technologies. The distinction is sometimes framed this way: Google Research is mostly bits; Google X is mostly atoms. In other words, X is tasked with making actual objects that interact with the physical world, which to a certain extent gives logical coherence to the four main projects that have so far emerged from X: driverless cars, Google Glass, high-altitude Wi-Fi balloons, and glucose-monitoring contact lenses. Mostly, X seeks out people who want to build stuff, and who won't get easily daunted.

Inside the lab, now more than 250 employees strong, I met an idiosyncratic troupe of former park rangers, sculptors, philosophers, and machinists; one X scientist has won two Academy Awards for special effects. Teller himself has written a novel, worked in finance, and earned a PhD in artificial intelligence. One recent hire spent five years of his evenings and weekends building a helicopter in his garage. It actually works, and he flew it regularly, which seems insane to me. But his technology skills alone did not get him the job. The helicopter did. "The classic definition of an expert is someone who knows more and more about less and less until they know everything about nothing," says DeVaul. "And people like that can be extremely useful in a very focused way. But these are really not X people. What we want, in a sense, are people who know less and less about more and more."

If there's a master plan behind X, it's that a frictional arrangement of ragtag intellects is the best hope for creating products that can solve the world's most intractable issues. Yet Google X, as Teller describes it, is an experiment in itself--an effort to reconfigure the process by which a corporate lab functions, in this case by taking incredible risks across a wide variety of technological domains, and by not hesitating to stray far from its parent company's business. We don't yet know if this will prove to be genius or folly. There's actually no historical model, no precedent, for what these people are doing.

But in some ways that makes sense. Google finds itself at a juncture in history that has not come before, and may not come again. The company is almost unimaginably rich and stocked with talent; it is hitting its peak of influence at a moment when networks and computing power and artificial intelligence are coalescing in what many technologists describe as (to borrow the Valley's most popular meme) "the second machine age." In addition, it is trying hard to develop another huge core business to augment its massive search division. So why not do it through X? To Teller, this failure-loving lab has simply stepped into the breach. Small companies don't feel they have the resources to take moonshots. Big companies think it'll rattle shareholders. Government leaders believe there's not enough money, or that Congress will characterize a misstep or failure as a scandal. These days, when it comes to Hail Mary innovation, "Everyone thinks it's somebody else's job," Teller says.

It is worth noting that X's moonshots are not as purely altruistic as Google likes to make them sound. While self-driving cars will almost certainly save lives, for instance, they will also free up drivers to do web searches and use Gmail. Wi-Fi balloons could result in a billion more Google users. Still, it's hard not to appreciate that these ideas, along with others coming from X, are breathtakingly idealistic. When I ask Teller why Google has chosen to invest in X rather than something that might appeal more to Wall Street, he dismisses the premise. Then he cracks a smile. "That's a false choice," he says. "Why do we have to pick?"


Google X is situated at the edge of the Google campus, housed mostly in a couple of three-story red-brick buildings. The lab has no sign in front, just as it has no official website ("What would we put on the website, anyway?" Teller asks). The main building's entrance leads into a small, self-serve coffee bar. The aesthetic is modern, austere, industrial. To the left is a cavernous room with dozens of cubicles and several conference rooms; to the right is a bike rack and a lunchroom with a stern warning posted that only X employees are allowed. Otherwise, there's little indication you're in a supersecret lab. Most of the collaborative workshops are downstairs, in high-ceilinged rooms with whimsical names such as "Castle Grayskull," and are cluttered with electronic paraphernalia and Xers bent over laptops.

The origins of X date to around 2009, when Brin and Google cofounder Larry Page conceived of a position called Director of Other; this person would oversee ideas far from Google's core search business. This notion evolved into X around 2010, thanks to Google engineer Sebastian Thrun's effort, backed by Brin and Page, to build a driverless car. The X lab grew up around that endeavor, with Thrun in charge. Thrun chose Teller as one of his codirectors, but when Thrun was drawn deeper into developing the car technology (and later into his online educational startup, Udacity), he gave up on overseeing other X projects. That's when Teller assumed day-to-day responsibilities.

There are differing explanations for what the X actually stands for. At first it was simply a placeholder for a better name, but these days it usually denotes the search for solutions that are better by a factor of 10. Some of the Xers I met, however, think of the X as representing an organization willing to build technologies that are 10 years away from making a large impact.

This in itself is fairly unique. Once upon a time, corporate labs invested a chunk of their R&D budget in risky, long-term projects, but an increasing focus on quarterly earnings, and the realization that it can be exceedingly hard to recoup an investment in far-off research, ended almost all such efforts. These days, it's considered more sensible for a company to fund short-term research--or if it wants to think far into the future, to either buy rights to an embryonic idea that arises from university research or a government lab, or to swallow up an innovative startup. Teller and Brin are not averse to doing this; for example, the wind-energy company Makani was recently bought by Google and folded into X. But Google and X have often rejected the conventional business wisdom in favor of hatching their own wild-eyed research schemes, and then waiting patiently for them to mature. Recently, when Page was challenged on an earnings call about the sums he was pouring into R&D, he made no effort to excuse it. "My struggle in general is to get people to spend money on long-term R&D," he said, noting that the amounts he was investing were modest in light of Google's profits. Then he chided the financial community: Shouldn't they be asking him to make more big, risky, long-term investments, not fewer?
Rich DeVaul heads the Rapid Evaluation team. "If there's a completely crazy, lame idea, then it's probably coming from me."

Generally speaking, there are three criteria that X projects share. All must address a problem that affects millions--or better yet, billions--of people. All must utilize a radical solution that has at least a component that resembles science fiction. And all must tap technologies that are now (or very nearly) obtainable. But to DeVaul, the head of Rapid Eval, there's another, more unifying principle that connects the three criteria: No idea should be incremental. This sounds terribly clichéd, DeVaul admits; the Silicon Valley refrain of "taking huge risks" is getting hackneyed and hollow. But he and his colleagues don't reject incrementalism for ideological reasons, he says. They reject it for practical ones. "It's so hard to do almost anything in this world," he says. "Getting out of bed in the morning can be hard for me. But attacking a problem that is twice as big or 10 times as big is not twice or 10 times as hard."

DeVaul insists that it's often just as easy, or easier, to make inroads on the biggest problems "than to try to optimize the next 5% or 2% out of some process." Think about cars, he tells me. If you want to design a car that gets 80 mpg, it requires a lot of work, yet it really doesn't address the fundamental problem of global fuel resources and emissions. But if you want to design a car that gets 500 mpg, which actually does attack the problem, you are by necessity freed from convention, since you can't possibly improve an existing automotive design by such a degree. Instead you start over, reexamining what a car really is. You think of different kinds of motors and fuels, or of space-age materials of such gossamer weight and iron durability that they alter the physics of transportation. Or you dump the idea of cars altogether in favor of a substitute. And then maybe, just maybe, you come up with something worthy of X.

DeVaul is leaning back on a chair in a big ground-floor conference room at X. He's brought me here to demonstrate how the Rapid Eval team discusses ideas. We're joined around an oblong wood table by two of his colleagues, Dan Piponi and Mitch Heinrich. The men are a study in intellectual contrasts. Piponi, 47, is soft-spoken, laconic, British--a mathematician and theoretical physicist and the winner of those Oscars. Even among the bright minds at Google X, he's regarded as freakishly smart. Heinrich, the lab's young design guru, gives off an affable art-school vibe. On his own initiative, he's built what's known as the design kitchen, a large fabrication shop that's stocked with 3-D printers, table saws, and sophisticated lathes in a building adjacent to the primary X lab. He brings a plastic tub stuffed with old eyeglass frames to the Rapid Eval session. "These were some early prototypes for Glass," he explains, randomly pulling out some circuit boards and a few terrifically ugly designs. They weren't intended for the market, he says, but to show his colleagues that what they were conceptualizing could indeed be built.


DeVaul, 43, completes the trio. He has a PhD from MIT and worked at Apple for several years before coming to Google. It is difficult to figure out precisely what he studied in college--after 10 minutes of explaining, it sounds like some mashup of design, physics, anthropology, and machine learning. As such, he can talk a blue streak on a dazzling range of topics: crime, communications, computers, material science, robotics. It was DeVaul, in fact, who came up with the idea for Project Loon, as those Wi-Fi balloons are officially known. He tried desperately to make it fail on technological grounds but found he could not, so he agreed to run the project for about a year before returning to Rapid Eval.

In some respects, watching his group in action is like watching an improv team warm up--ideas are bounced about quickly, analytically, kinetically, in an effort to make them stick or lead toward something better. The team on most Rapid Eval sessions numbers about half a dozen, including DeVaul, Piponi, and Heinrich (and sometimes Teller); they meet for lunch once a week to discuss suggestions that have bubbled up from within X or have filtered in from outside--from their parent company, say, or somebody's acquaintance in academia. Later in the week, one or two of the best suggestions are brought up again more formally for further consideration. Mostly the team looks at the scale of the issue, the impact of the proposed fix, and the technological risks. Will it really solve the problem? Can the thing actually be built? Then they consider the social risks. If we can build it, will it--can it--actually be used?


There's a reason they factor these questions into their early calculus. When you're explicitly trying to imagine products that have no real counterparts in our culture, the obstacles have to be imagined, too. With driverless cars, for instance, there remain unresolved complexities of state laws, infrastructure, and insurance; for Google Glass, there are huge ongoing privacy issues. But if the team believes these kinds of hurdles are surmountable and is still sufficiently curious about a technology by the end of the discussion, they'll ask Heinrich or Piponi to build a crude prototype, ideally in a few days. Once they're satisfied that it can work, they move toward getting the brass to officially commission the project. They will not say how often this has happened, except that it's exceedingly rare. "It's a really high bar to say, 'This is going to be a new Google X project,' " says DeVaul. And that doesn't mean it won't be killed as it evolves. It's a much higher bar to actually launch a Google X project, he points out. "Sometimes the problems at Google X are very easy to frame, such as two-thirds of the world does not have reliable, affordable Internet access." That's what led him to Project Loon. "But some problems are easier to see in the rearview mirror. Imagine how hard it would be to explain to your pre-smartphone self how much this is going to change your life." DeVaul says this is the type of thinking that led to Google Glass. "It's a matter of looking back from the future, where everyone walks around with smart glasses and no one leaves their house without them. And then it becomes obvious: 'Well, of course I want to be connected to information, but in a way that's minimally invasive, and minimally imposes on my attention.'"

He makes it sound quite reasonable. But this is also the point in the conversation when we start talking, quite seriously, about hoverboards and space elevators.

DeVaul is an avid skateboarder, and building a hoverboard is something that he has long imagined. "I just wanted one," he tells me, shrugging. When he brought it up for discussion last year--"If there's a completely crazy, lame idea, then it's probably coming from me," he says--the group actually discerned some practical applications. In industrial settings, moving heavy things on a frictionless platform could be not only valuable but transformative. "Imagine a giant fulfillment center like Amazon's, where all the pallets can levitate and move around," DeVaul says. "Or what about a lab where all the heavy equipment would come to me?"

"Dan, show him the hoverboard you built," says Heinrich.

"Right," says Piponi, sitting up and clearing his throat. In front of him is a small, shiny rectangle, about the size of a hardcover book. On the surface is a tight configuration of circular magnets. "So the first question here relates to the physics," Piponi says. "Can you actually have an object hovering about? And so people try really hard with magnets--to find some arrangement that keeps something hovering." This is the logic behind the superfast magnetic-levitation trains now used in China and Japan. But these "mag-lev" systems have a stabilizing structure that keeps trains in place as they hover and move forward in only one direction. That couldn't quite translate into an open floor plan of magnets that keep a hoverboard steadily aloft and free to move in any direction. One problem, as Piponi explains, is that magnets tend to keep shifting polarities, so your hoverboard would constantly flip over as you floated around, lurching from repulsion to attraction with the magnets below. Any skateboarder could tell you what that means: Your hoverboard would suck.

But that's exactly the sort of problem X is designed to attack. The impossibility Piponi describes is formalized in Earnshaw's theorem, which rules out stably levitating an object using static magnets alone. "There are loopholes in this theorem that you have to find," Piponi says. "There are materials that are kind of weird, that don't behave like magnets normally do." Piponi discovered that a very thin slice of a certain type of graphite would actually work well on a small bed of magnets. So he built one for the Rapid Eval team. He pushes his small hoverboard across the table to me, and I try it. The graphite slice, not much larger than a quarter, floats slightly above the magnets, gliding in any direction with the most ethereal push. When DeVaul first saw this, he tells me, he was astounded.

Yet by that point, Piponi had already moved on. As he did the calculations involved in expanding the small hoverboard up to a usable size, the physics suggested that at a certain point the weight of the board would overwhelm the magnetic cushion holding it aloft. Other technologies could conceivably help (you might try using special materials at supercool temperatures), but the team decided that would create huge additional costs and complications--costs that would not be justified by the project's relatively modest social and economic impact. So the Google X hoverboard was shelved. "When we let it go, it's a positive thing," DeVaul says. "We're saying, 'This is great: Now we get to work on other things.'"
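The scaling obstacle Piponi ran into can be illustrated with a back-of-envelope estimate. (The numbers below are my own rough figures for pyrolytic graphite and a hypothetical rider, not X's calculations.) The magnet bed supplies only about enough lift pressure to float the thin graphite sheet's own weight per unit area, while a person standing on a board adds a pressure roughly a thousand times larger:

```python
# Rough estimate of why a rideable diamagnetic hoverboard fails to scale.
# Assumptions (mine, for illustration): a 0.1 mm pyrolytic graphite sheet,
# a 70 kg rider, a 0.3 m^2 board.
RHO_GRAPHITE = 2200.0   # kg/m^3, approximate density of pyrolytic graphite
G = 9.81                # m/s^2, gravitational acceleration

def lift_pressure(thickness_m):
    """Weight per unit area of a graphite sheet -- roughly the lift
    pressure the magnet bed supplies when the sheet just levitates."""
    return RHO_GRAPHITE * G * thickness_m

def rider_pressure(mass_kg, board_area_m2):
    """Extra pressure a rider adds on a board of the given area."""
    return mass_kg * G / board_area_m2

sheet = lift_pressure(100e-6)       # ~2 Pa for a 0.1 mm sheet
rider = rider_pressure(70.0, 0.3)   # ~2300 Pa for a 70 kg rider

print(f"sheet needs ~{sheet:.1f} Pa, rider adds ~{rider:.0f} Pa")
print(f"shortfall: roughly {rider / sheet:.0f}x")
```

On these assumptions the available lift falls short by about three orders of magnitude, which is consistent with the team's conclusion that only exotic additions (such as supercooled materials) could close the gap.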

Like space elevators, something X was widely rumored to be working on but has never confirmed until now. "You know what a space elevator is, right?" DeVaul asks. He ticks off the essential facts--a cable attached to a satellite fixed in space, tens of thousands of miles above Earth. To DeVaul, it would no doubt satisfy the X criteria of something straight out of sci-fi. And it would presumably be transformative by reducing space travel to a fraction of its present cost: Transport ships would clip on to the cable and cruise up to a space station. One could go up while another was heading down. "It would be a massive capital investment," DeVaul says, but after that "it could take you from ground to orbit with a net of basically zero energy. It drives down the space-access costs, operationally, to being incredibly low."

Not surprisingly, the team encountered a stumbling block. If scaling problems are what brought hoverboards down to earth, material-science issues crashed the space elevator. The team knew the cable would have to be exceptionally strong--"at least a hundred times stronger than the strongest steel that we have," by Piponi's calculations. He found one material that could do this: carbon nanotubes. But no one has manufactured a perfectly formed carbon nanotube strand longer than a meter. And so elevators "were put in a deep freeze," as Heinrich says, and the team decided to keep tabs on any advances in the carbon nanotube field.
Mitch Heinrich created Google X's design kitchen, where he and other team members build simple prototypes for big ideas.


The larger lesson here is that any Google X idea that hinges on some kind of new development in material science cannot proceed. This is not the case with electronics--X could go forward with a device that depends upon near-term improvements in computing capability because Moore's law predicts an exponential increase in computing power. That is why DeVaul's team is confident that Google Glass will get less awkward with each passing year. But there is no way to accurately predict when a new material or manufacturing process will be invented. It could happen next year, or it could take 100 years.

The conversation eventually drifts to how the team had at one point debated the pros and cons of taking on teleportation. Yes, like in Star Trek. As with that show's Transporter, the molecules of a person or thing could theoretically be "beamed" across a physical distance with the help of some kind of scanning technology and a teleportation device. None of which really exists, of course. Piponi, after some study, concluded that teleportation violates several laws of physics. But out of those discussions came a number of insights--too complicated to explain here--into encrypted communications that would be resistant to eavesdropping, a matter of great interest to Google (especially in light of recent NSA spying revelations). So bad ideas lead to good ideas, too. "I like to look at these problems as ladders," DeVaul says.

At the moment, the Rapid Eval team is watching the work of certain academics who are attempting to create superstrong, ultralight materials.

One Caltech professor, Julia Greer, is working on something called "nanotrusses" that DeVaul is particularly enthusiastic about. "It would completely change how we build buildings," he says. "Because if I have something that's insanely strong and incredibly compact, maybe I could prefabricate an entire building; it fits into a little box, I take it to the construction site, and it unfolds like origami and becomes a building that is stronger than anything we have right now and holds a volume as big as this building." There's a moment of silence in the room.

"I know that sounds completely insane," he adds. But I'm not sure it sounds crazy to him.

At one point, DeVaul asks if I have any ideas of my own for Rapid Eval consideration. I had been warned in advance that he might ask this, and I came prepared with a suggestion: a "smart bullet" that could protect potential shooting victims and reduce gun violence, both accidental and intentional. You have self-driving cars that avoid harm, I say. Why not self-driving ballistics? DeVaul doesn't say it's the stupidest thing he's ever heard, which is a relief. What ensues is a conversation that feels like a rapid ascent up that imaginary ladder. We quickly debate the pros and cons of making guns intelligent (that technology already exists to a certain degree) versus making bullets intelligent (likely much more difficult). We move from a specific discussion of "self-pulverizing" bullets with tiny, embedded hypodermic needles that deliver stun-drugs (DeVaul's idea) to potentially using sensors and the force of gravity to bring a bullet to the ground before it can strike the wrong target (Heinrich's). Then comes the notion of separating the bullet's striker from the explosive charge with a remote disabling electronic switch (Piponi). The tenor soon changes, though. We start talking about smart holsters for police officers, and then intelligent gun sights--something that firearms owners might actually want to buy. They think that idea might even be worth a rapid prototype. But we also debate the political and marketplace viability of bullet technology--who would purchase it, who would object to it, what kind of impact it might have. Eventually it becomes clear that in many ways, appearances often to the contrary, Google X tries hard to remain on the practical side of crazy.

Obi Felten's official title is Head of Getting Moonshots Ready for Contact With the Real World.

Later in the day, I take a walk around the Google campus with Obi Felten, 41, the team member who tries to keep the group grounded. In fact, DeVaul refers to her as "the normal person" in Rapid Eval meetings, someone who can bring everyone back to earth by asking simple questions like, Is it legal? Will anyone buy this? Will anyone like this? Felten is not an engineer; she worked in marketing for Google in Europe before coming to X. "My actual title now," she tells me, "is Head of Getting Moonshots Ready for Contact With the Real World." One thing Felten struggles with is that there's no real template for how a company should bring these kinds of radical technologies to market. ("If you find a model," she says, "let me know.") Fortunately for X, not everything has to evolve into a huge source of revenue. "The portfolio has to make money," Felten explains, but not necessarily each product. "Some of these will be better businesses than others, if you want to measure in terms of dollars. Others might make a huge impact on the world, but it's not a massive market."

Later this year, X hopes to announce a top-secret new project that is likely to fall into that latter category. What will it be? There are no discernible clues. In my own conversations, I could only glean certain hints--that they're extremely curious about transportation and clean energy, and that they are especially serious about creating better medical diagnostics, rather than medical treatments, because they see a far greater impact. At one point, I walked through a Google X user-experience lab, where psychologists gain insights from volunteers trying possible forthcoming technologies. A large object, about, oh, the size of the Maltese Falcon, had been wrapped in black plastic. Go figure.

Meanwhile, consider that X has an overwhelming task on its hands already. The organization must move all of its unveiled projects at least one square ahead this year. Project Loon--which has not finalized a business plan yet--has apparently drawn interest from most of the telecom companies in the world, but is still not technically ready for scaling up. (It was unveiled in part because the patents were about to be made public, and Google preferred to disclose it on its own terms.) Google Glass, the X product closest to commercialization, and self-driving cars, which are much farther away, have both sparked extraordinary public interest, yet it is impossible to say if or when they'll succeed as businesses, or whether they'll have that 10-times impact within a 10-year period.

That evening at dinner with Teller, I bring up all of these issues. To me, the fundamental challenge of fashioning extreme solutions to very big problems is that society tends to move incrementally, even as many fields of technology seem to advance exponentially. An innovation that saves us time or money or improves our health might always have a fighting chance at success. But with Glass, we see a product that seems to alter not only our safety and efficiency--as with self-driving cars--but our humanity. This seems an even bigger obstacle than some of the more practical issues that the lab grapples with, but the Xers don't seem overly concerned. Teller, in fact, contends that Glass could make us more human. He thinks it solves a huge problem--getting those square rectangles out of our pockets and making technology more usable, more available, less obstructive. But isn't it possible that Glass is the wrong answer to the right problem? "Of course," Teller says. "But we're not done. And it's possible that we missed. I mean, I know we missed in some ways."

The part of the X process that colleagues like Obi Felten think about, he says, is also meant to be iterative. "It's to say to the world: What do you think? How can we make this better? It's part of us being open to being wrong, because it's way easier, and way cheaper, and way more fun to find out now that we missed than to find out years from now, with an incredible amount of additional expense and emotional investment." Teller says he calls X's ideas "moonshots" for a reason. "If one of Google X's projects were a home run, became ­everything we wanted, I would be really happy," he says. "I would be overjoyed if it happened with two."

At one point, I mention my own moonshot to Teller, that smart bullet that DeVaul's team had talked through earlier in the day. It wasn't a disaster, I say, but it wasn't much of a success, either. "Well, that's entirely appropriate," Teller says, sympathetically. "Most ideas don't work out. Almost all ideas don't work out. So it's okay if yours didn't work out." He thinks for a moment. "How about instead of a bullet it delivers a deadly toxin that could be reversed in a week?" It wouldn't stop bad guys immediately, he says, but once they were shot, they would have to go turn themselves in to get the antidote. He mulls it over for a moment more. "I don't know," he says, already seeing the obstacles ahead. "I'm just brainstorming."
[Photos by Zen Sekizawa]

ORIGINAL: FastCo
BY JON GERTNER