Monday, November 5, 2018

The Power of Option B to Break Stereotypes

Dr. Alexandra Olaya Castro is a theoretical physicist known for her work in biomolecular quantum physics, in particular for her research on quantum effects in photosynthesis. In 2016 she was awarded the Maxwell Medal of the Institute of Physics, one of the highest distinctions in theoretical physics, becoming the first Latin American to receive it.

In this TEDxBogotaMujeres talk, Alexandra Olaya Castro discusses stereotypes and proposes confronting them in order to eliminate them, that is, choosing "option B." She first recounts her personal story: how, coming from a humble Colombian family, she managed to study, earn a doctorate, and win the Maxwell Medal. She then turns to her research in biomolecular quantum physics and to how she took "option B" to work in an interdisciplinary area, breaking stereotypes in science.

I see stereotypes as social black holes that trap the light of brilliant minds, of talented minds, of minds that can transform.

Edited by Marta Macho Stadler
April 8, 2018

Sunday, August 26, 2018

Test Tube Artificial Neural Network Recognizes "Molecular Handwriting"

Conceptual illustration of a droplet containing an artificial neural network made of DNA that has been designed to recognize complex and noisy molecular information, represented as 'molecular handwriting.' Credit: Olivier Wyart

Test tube chemistry using synthetic DNA molecules can carry out complex computing tasks and exhibit artificial intelligence

Researchers at Caltech have developed an artificial neural network made out of DNA that can solve a classic machine learning problem: correctly identifying handwritten numbers. The work is a significant step in demonstrating the capacity to program artificial intelligence into synthetic biomolecular circuits.

The work was done in the laboratory of Lulu Qian, assistant professor of bioengineering. A paper describing the research (paywall) appears online on July 4 and in the July 19 print issue of the journal Nature.

"Though scientists have only just begun to explore creating artificial intelligence in molecular machines, its potential is already undeniable," says Qian. "Similar to how electronic computers and smart phones have made humans more capable than a hundred years ago, artificial molecular machines could make all things made of molecules, perhaps including even paint and bandages, more capable and more responsive to the environment in the hundred years to come."

Artificial neural networks are mathematical models inspired by the human brain. Despite being much simplified compared to their biological counterparts, artificial neural networks function like networks of neurons and are capable of processing complex information. The Qian laboratory's ultimate goal for this work is to program intelligent behaviors (the ability to compute, make choices, and more) with artificial neural networks made out of DNA.

"Humans each have over 80 billion neurons in the brain, with which they make highly sophisticated decisions. Smaller animals such as roundworms can make simpler decisions using just a few hundred neurons. In this work, we have designed and created biochemical circuits that function like a small network of neurons to classify molecular information substantially more complex than previously possible," says Qian.

To illustrate the capability of DNA-based neural networks, Qian laboratory graduate student Kevin Cherry chose a task that is a classic challenge for electronic artificial neural networks: recognizing handwriting.

Human handwriting can vary widely, and so when a person scrutinizes a scribbled sequence of numbers, the brain performs complex computational tasks in order to identify them. Because it can be difficult even for humans to recognize others' sloppy handwriting, identifying handwritten numbers is a common test for programming intelligence into artificial neural networks. These networks must be "taught" how to recognize numbers, account for variations in handwriting, then compare an unknown number to their so-called memories and decide the number's identity.

Key to creating biomolecular circuits out of DNA are the strict binding rules between molecules of DNA. A single-stranded DNA molecule is composed of smaller molecules called nucleotides—abbreviated A, T, C, and G—arranged in a string, or sequence. The nucleotides in a single-stranded DNA molecule can bond with those of another single strand to form double-stranded DNA, but the nucleotides bind only in very specific ways: An A nucleotide with a T or a C nucleotide with a G.
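These base-pairing rules are simple enough to capture in a few lines. A minimal Python sketch (function names are illustrative, not taken from the paper):

```python
# Watson-Crick base-pairing: A binds T, C binds G.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the complementary sequence of a single DNA strand."""
    return "".join(PAIRING[base] for base in strand)

def reverse_complement(strand: str) -> str:
    """Strands bind antiparallel, so the partner strand is read in reverse."""
    return complement(strand)[::-1]

def can_hybridize(strand_a: str, strand_b: str) -> bool:
    """Two single strands form a full duplex only if every base pairs."""
    return strand_b == reverse_complement(strand_a)
```

It is exactly this predictability, every strand has one well-defined full-length partner, that lets a designer treat DNA hybridization as a programmable logic primitive.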

Taking advantage of these predictable binding rules, Qian and her colleagues can design short strands of DNA to undergo predictable chemical reactions in a test tube and thereby compute tasks, such as molecular pattern recognition. In 2011, Qian and her colleagues created the first artificial neural network made of DNA molecules that could recognize four simple patterns.

In the work described in the Nature paper, Cherry, who is the first author on the paper, demonstrated that a neural network made out of carefully designed DNA sequences could carry out prescribed chemical reactions to accurately identify "molecular handwriting." Unlike visual handwriting that varies in geometrical shape, each example of molecular handwriting does not actually take the shape of a number. Instead, each molecular number is made up of 20 unique DNA strands chosen from 100 molecules, each assigned to represent an individual pixel in any 10 by 10 pattern. These DNA strands are mixed together in a test tube.
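The encoding described above can be sketched in a few lines of Python: a molecular "digit" is simply a set of 20 pixel indices out of 100, with no geometry attached (names here are illustrative, not from the paper):

```python
GRID = 10            # patterns are 10 x 10 pixels
NUM_STRANDS = 100    # one unique DNA strand per pixel position
STRANDS_PER_DIGIT = 20

def make_molecular_digit(on_pixels):
    """A molecular 'digit' is just the set of pixel-strands present in the
    tube: exactly 20 of the 100 possible strands, with no shape at all."""
    pixels = frozenset(on_pixels)
    assert len(pixels) == STRANDS_PER_DIGIT
    assert all(0 <= p < NUM_STRANDS for p in pixels)
    return pixels

def as_bitmap(digit):
    """Render the strand set back onto the 10 x 10 grid for visualization."""
    return [[1 if row * GRID + col in digit else 0 for col in range(GRID)]
            for row in range(GRID)]
```

The point of `as_bitmap` is purely for human eyes; the network itself only ever "sees" which strands are present in the mixture.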

"The lack of geometry is not uncommon in natural molecular signatures yet still requires sophisticated biological neural networks to identify them: for example, a mixture of unique odor molecules comprises a smell," says Qian.

Given a particular example of molecular handwriting, the DNA neural network can classify it into up to nine categories, each representing one of the nine possible handwritten digits from 1 to 9.

First, Cherry built a DNA neural network to distinguish between handwritten 6s and 7s. He tested 36 handwritten numbers, and the test tube neural network correctly identified all of them. In theory, his system is capable of sorting over 12,000 handwritten 6s and 7s—90 percent of the numbers in a database of handwritten digits widely used for machine learning—into the two categories.

Crucial to this process was encoding a "winner take all" competitive strategy using DNA molecules, developed by Qian and Cherry. In this strategy, a particular type of DNA molecule dubbed the annihilator was used to select a winner when determining the identity of an unknown number.

"The annihilator forms a complex with one molecule from one competitor and one molecule from a different competitor and reacts to form inert, unreactive species," says Cherry. "The annihilator quickly eats up all of the competitor molecules until only a single competitor species remains. The winning competitor is then restored to a high concentration and produces a fluorescent signal indicating the network's decision."
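The winner-take-all dynamics can be illustrated with a toy discrete simulation. This mirrors the annihilation reaction only qualitatively (the real system is governed by chemical kinetics, not lockstep rounds):

```python
def winner_take_all(counts):
    """Toy annihilator: repeatedly consume one molecule from each of two
    surviving competitor species until at most one species remains.
    Returns the winning species, or None on a dead-even tie."""
    counts = dict(counts)
    while sum(1 for v in counts.values() if v > 0) > 1:
        # pick any two surviving competitors; annihilate one molecule of each
        a, b = [k for k, v in counts.items() if v > 0][:2]
        counts[a] -= 1
        counts[b] -= 1
    survivors = [k for k, v in counts.items() if v > 0]
    return survivors[0] if survivors else None
```

Whichever species starts with the largest concentration outlasts the others; in the real circuit that survivor is then catalytically restored to high concentration to produce the fluorescent readout.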

Next, Cherry built upon the principles of his first DNA neural network to develop one even more complex, one that could classify single digit numbers 1 through 9. When given an unknown number, this "smart soup" would undergo a series of reactions and output two fluorescent signals, for example, green and yellow to represent a 5, or green and red to represent a 9.
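One way to see why two signals suffice for nine classes: with five distinguishable fluorophores there are C(5,2) = 10 unordered pairs, enough to label nine digits. A sketch in Python (the palette and the digit-to-pair assignment here are hypothetical, not the paper's actual mapping):

```python
from itertools import combinations

COLORS = ["green", "yellow", "red", "orange", "cyan"]  # hypothetical palette

# Assign each digit 1-9 a distinct unordered pair of fluorescent signals.
PAIR_FOR_DIGIT = dict(zip(range(1, 10), combinations(COLORS, 2)))

def readout(digit):
    """Return the two-color signature reported for a given digit."""
    return PAIR_FOR_DIGIT[digit]
```

Reading out pairs rather than single colors is what lets a small number of distinguishable dyes cover all nine output classes.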

Qian and Cherry plan to develop artificial neural networks that can learn, forming "memories" from examples added to the test tube. This way, Qian says, the same smart soup can be trained to perform different tasks.

"Common medical diagnostics detect the presence of a few biomolecules, for example cholesterol or blood glucose," says Cherry. "Using more sophisticated biomolecular circuits like ours, diagnostic testing could one day include hundreds of biomolecules, with the analysis and response conducted directly in the molecular environment."

The paper is titled "Scaling up molecular pattern recognition with DNA-based winner-take-all neural networks." Funding was provided by the National Science Foundation, the Burroughs Wellcome Fund, and the Shurl and Kay Curci Foundation.


by Lori Dajose

Millimeter-Scale Computers: Now With Deep Learning Neural Networks on Board

Photo: University of Michigan and TSMC
One of several varieties of University of Michigan micro motes. This one incorporates 1 megabyte of flash memory.
Computer scientist David Blaauw pulls a small plastic box from his bag. He carefully uses his fingernail to pick up the tiny black speck inside and place it on the hotel café table. At one cubic millimeter, this is one of a line of the world’s smallest computers. I had to be careful not to cough or sneeze lest it blow away and be swept into the trash.

Blaauw and his colleague Dennis Sylvester, both IEEE Fellows and computer scientists at the University of Michigan, were in San Francisco this week to present ten papers related to these “micro mote” computers at the IEEE International Solid-State Circuits Conference (ISSCC). They’ve been presenting different variations on the tiny devices for a few years.

Their broader goal is to make smarter, smaller sensors for medical devices and the internet of things—sensors that can do more with less energy. Many of the microphones, cameras, and other sensors that make up the eyes and ears of smart devices are always on alert, and they frequently beam personal data into the cloud because they can’t analyze it themselves. Some have predicted that by 2035, there will be 1 trillion such devices. “If you’ve got a trillion devices producing readings constantly, we’re going to drown in data,” says Blaauw. By developing tiny, energy-efficient computing sensors that can do analysis on board, Blaauw and Sylvester hope to make these devices more secure, while also saving energy.

Photo: University of Michigan/TSMC
Made of multiple layers of computing.

At the conference, they described micro mote designs that use only a few nanowatts of power to perform tasks such as distinguishing the sound of a passing car and measuring temperature and light levels. They showed off a compact radio that can send data from the small computers to receivers 20 meters away—a considerable boost compared to the 50 centimeter range they reported last year at ISSCC. They also described their work with TSMC on embedding flash memory into the devices, and a project to bring on board dedicated, low-power hardware for running artificial intelligence algorithms called deep neural networks.

Blaauw and Sylvester say they take a holistic approach to adding new features without ramping up power consumption. “There’s no one answer” to how the group does it, says Sylvester. If anything, it’s “smart circuit design,” Blaauw adds. (They pass ideas back and forth rapidly, not finishing each other’s sentences but something close to it.)

The memory research is a good example of how the right tradeoffs can improve performance, says Sylvester. Previous versions of the micro motes used 8 kilobytes of SRAM, which makes for a pretty low-performance computer. To record video and sound, the tiny computers need more memory. So the group worked with TSMC to bring flash memory on board. Now they can make tiny computers with 1 megabyte of storage.

Flash can store more data in a smaller footprint than SRAM, but it takes a big burst of power to write to the memory. With TSMC, the group designed a new memory array that uses a more efficient charge pump for the writing process. The memory arrays are a bit less dense than TSMC’s commercial products, for example, but still much better than SRAM. “We were able to get huge gains with small trade-offs,” says Sylvester.

Another micro mote they presented at the ISSCC incorporates a deep-learning processor that can operate a neural network while using just 288 microwatts. Neural networks are artificial intelligence algorithms that perform well at tasks such as face and voice recognition. They typically demand both large memory banks and intense processing power, and so they’re usually run on banks of servers often powered by advanced GPUs. Some researchers have been trying to lessen the size and power demands of deep-learning AI with dedicated hardware that’s specially designed to run these algorithms. But even those processors still use over 50 milliwatts of power—far too much for a micro mote. The Michigan group brought down the power requirements by redesigning the chip architecture, for example by situating four processing elements within the memory (in this case, SRAM) to minimize data movement.

The idea is to bring neural networks to the internet of things. “A lot of motion detection cameras take pictures of branches moving in the wind—that’s not very helpful,” says Blaauw. Security cameras and other connected devices are not smart enough to tell the difference between a burglar and a tree, so they waste energy sending uninteresting footage to the cloud for analysis. On-board deep-learning processors could make better decisions, but only if they don’t use too much power. The Michigan group imagines deep-learning processors could be integrated into many other internet-connected things besides security systems. For example, an HVAC system could decide to turn the air conditioning down if it sees multiple people putting on their coats.

After demonstrating many variations on these micro motes in an academic setting, the Michigan group hopes they will be ready for market in a few years. Blaauw and Sylvester say their start-up company CubeWorks is currently prototyping devices and researching markets. The company was quietly incorporated in late 2013. Last October, Intel Capital announced they had invested an undisclosed amount in the tiny computer company. 

Posted 10 Feb 2017

With Synthetic Biology Software, Geneticists Design Living Organisms From Scratch

Image: Chris Bickel
The first time geneticist Jef Boeke designed a synthetic chromosome, he sometimes wrote and edited its DNA sequence in a Microsoft Word document.

His goal was to create a slightly altered version of yeast chromosome 9, the shortest of the 16 chromosomes that make up the organism’s genome and contain all the operating instructions for life. He started with the short chromosome’s right arm, but even this task was daunting. Its DNA code consisted of 90,000 “letters,” the molecules referred to as A, C, G, and T that are arranged in particular sequence to encode biological function.

Painstakingly, Boeke went through the code, making changes that he thought would be scientifically interesting or that would make the chromosome more stable. This misery drove him to seek help from Sarah Richardson, a student in his neighbor Joel Bader’s lab, who wrote scripts to automate some of the most tedious steps. This was the embryonic beginning of what would become the genome design software called BioStudio.

Once Boeke finished his design, the synthetic chromosome was constructed by taking short snippets of manufactured DNA and stringing them together. Then Boeke’s team checked the design by taking a normal yeast cell, swapping out its natural chromosome 9, and looking to see if it would keep functioning with a manmade chromosome inside. Nobody knew if it would work.

It did. The results were published in Nature in 2011, and the quest to build synthetic critters from scratch took a big step forward. Boeke’s team prepared to design the other 15 chromosomes to make a completely synthetic yeast—and the world’s first completely synthetic complex organism.

But the manual approach wasn’t scalable. The chromosome 9 project had involved 90,000 letters, a length denoted as 90 kb. The overall yeast genome was 12 million letters long, or 12 Mb. “It was obvious right away that we needed something much more heavyweight,” Boeke says.

The results of their solution are now on display in the journal Science, which yesterday published seven papers from the synthetic Yeast 2.0 project. One of those papers describes their breakthrough enabling technology, the custom-built software program BioStudio.

Boeke, who leads the yeast project and serves as director of NYU’s Institute for Systems Genetics, oversaw the genome design. The papers published today describe that design process using BioStudio and also report on the completion of five new chromosomes by collaborators from around the world.

BioStudio allowed Boeke’s team to take the normal yeast genome and make the deletions, insertions, and changes they wanted, making genetic tinkering as easy as cut and paste. The program also includes a version control feature akin to Word’s track changes, recording each edit of the genome so it can easily be reversed if it’s later found to be detrimental to the yeast’s survival.
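The edit-plus-undo idea is the same one a simple edit log provides. A minimal Python sketch (BioStudio's actual data model is far richer; the class and method names here are invented for illustration):

```python
class GenomeEditor:
    """Sequence editing with a reversible history, in the spirit of
    BioStudio's track-changes feature."""

    def __init__(self, sequence):
        self.sequence = sequence
        self.history = []  # each entry: (position, removed_text, inserted_text)

    def edit(self, pos, remove_len, insertion):
        """Delete remove_len letters at pos, then insert `insertion` there."""
        removed = self.sequence[pos:pos + remove_len]
        self.sequence = (self.sequence[:pos] + insertion
                         + self.sequence[pos + remove_len:])
        self.history.append((pos, removed, insertion))

    def undo(self):
        """Reverse the most recent edit using the recorded log entry."""
        pos, removed, inserted = self.history.pop()
        self.sequence = (self.sequence[:pos] + removed
                         + self.sequence[pos + len(inserted):])
```

Because each log entry records exactly what was removed and what was inserted, any edit later found to harm the yeast can be reversed without reconstructing the whole design.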

Nothing like BioStudio existed when Boeke asked Richardson to help him out. Then a PhD candidate at Johns Hopkins University (and now chief scientific officer at the synthetic biology startup MicroByre), Richardson says existing software focused on displaying long genome sequences and allowing researchers to annotate them as they laboriously figured out the purpose of various strings of DNA. When she asked around about adding an editing function to let researchers change those intricate sequences, she got shocked responses. “You would have thought I’d suggested abandoning a toddler at the mall,” she says.

Richardson worked with Boeke to create genome-editing software wrapped in a user-friendly web interface called Gbrowse. For a while, Boeke was the software’s only user, and he provided Richardson with plenty of frank feedback. “I’d say, it’s way too slow, it’s killing me!” he remembers. They achieved one big speed-up when they realized that every edit—even the insertion of just a few letters—was causing a cascade of updates throughout the entire genome. By localizing the update, Boeke says, the editing process got about 15 times faster.

Once BioStudio was fully up and running, Boeke’s team designed the full genome of what they call Sc2.0, referencing the scientific name for brewer’s yeast, Saccharomyces cerevisiae. Overall, their Sc2.0 genome design is 8 percent shorter than the original yeast genome, and it includes 1.1 Mb (or roughly a million) changes.

After finalizing this initial design, they asked collaborators around the world to take on the project of building specific chromosomes. They knew the design would continue to morph, as some of their initial changes would prove infeasible. But they also knew that all edits made by their collaborators would be captured in track changes.

The original edits came from a long list, Boeke says. “We spent something like eight months debating what changes to put on the list,” he says. “It’s fundamentally an arbitrary list of genetic changes we thought would be interesting.” But the team had to be careful not to push it too far: “We knew that with every change we made, we’d increase the risk that we’d kill the yeast,” he says.

BioStudio enabled the designers to make some major edits easily, explains Leslie Mitchell, a postdoc researcher in Boeke’s lab who took the lead on much of the genome design. With single keystrokes, she could make changes that would affect all the DNA in a chromosome. Some of these system-wide changes removed repetitive segments of DNA or took out pieces called transposons that make genomes more prone to mutation. Another added “watermarks” that would show up when the synthetic DNA was added to a normal yeast cell, making it obvious which parts of the cell were human-made.

After such broad-scale edits were done, Mitchell says, the designers could go in and look at each chromosome’s sequence in detail, making expert decisions about where they wanted to make further changes. Overall, she estimates, it took about an hour to edit 100 kb of DNA, so the 500-kb chromosome 5 took about 5 hours to design.

Richardson, the coder, remembers that the researchers had one more big ask for BioStudio, which had to do with DNA assembly. While synthetic biology companies now make it easy to order custom strings of manufactured DNA, those strings are typically fairly short. For the synthetic yeast project, the researchers would order strings of DNA that were only about 70 letters, or base-pairs, long. When those strings arrived in the lab, the researchers first assembled them into “building blocks” of about 750 bp, then put those building blocks together into 2-4 kb “minichunks,” then constructed 10 kb “chunks,” and finally built 30-60 kb “megachunks.”

But there are genetic constraints on how strings of DNA can be assembled. The researchers wanted BioStudio to take any long DNA sequence and make it “modular,” chopping it up into pieces that could be ordered from the DNA-makers and then patched together in that series of assembly steps. “They wanted to be able to push a button when they were done with their edits, and have the genome slot itself into an assembly pattern,” Richardson remembers. “That was the craziest thing they asked for.”
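The modularization step, minus those genetic constraints, amounts to recursively partitioning the sequence at each level of the hierarchy. A naive Python sketch (piece sizes follow the article; real designs must also respect sequence constraints at the junctions, which are ignored here):

```python
def partition(sequence, piece_len):
    """Split a sequence into consecutive pieces of at most piece_len letters."""
    return [sequence[i:i + piece_len]
            for i in range(0, len(sequence), piece_len)]

def assembly_plan(genome):
    """Naive sketch of the Sc2.0 assembly hierarchy: 70 bp ordered oligos ->
    ~750 bp building blocks -> 2-4 kb minichunks -> 10 kb chunks ->
    30-60 kb megachunks."""
    return {
        "oligos": partition(genome, 70),
        "building_blocks": partition(genome, 750),
        "minichunks": partition(genome, 3_000),
        "chunks": partition(genome, 10_000),
        "megachunks": partition(genome, 30_000),
    }
```

What BioStudio actually does is harder: it must choose cut points so every piece can be synthesized and stitched back together, which is why Richardson calls it "the craziest thing they asked for."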

Synthetic biology lends itself to engineering’s classic “design-build-test” cycle. For Sc2.0, megachunks of the designer chromosomes were built and inserted into normal yeast cells to test whether they interfered with its life functions. If the yeast cell died or displayed abnormal behavior, the researchers embarked on a debugging process.

In one type of debugging, they would make many yeast colonies with many different combinations of synthetic megachunks and watch to see which colonies failed, then look for the common denominator in those failures. Mitchell, who led the work on designing and debugging chromosome 6, explains that there were different sorts of bugs. The most interesting were those that arose from genome changes they’d made that they expected to be harmless—because those bugs taught the researchers something about yeast biology.

Boeke says that so far, the team has found a bug in their genome design roughly every 300 kb. “But there may be more, we may not have found them all yet!” he says. With most of the synthetic yeast chromosomes still under construction, he’s still expecting surprises. “It’s like when you release code and wait for the user feedback,” he says.

The synthetic yeast project is on track to complete all 16 chromosomes by the end of 2017. Then the team will turn to the task of putting all the chromosomes into a single cell, and seeing if it still functions as a yeast cell should. That process may yield still more bugs, Mitchell says. “It might be that individual changes on two chromosomes are well tolerated, but they don’t work when you put them together,” she says. “We may potentially have to track bugs across chromosomes.”

While BioStudio has been invaluable for the synthetic yeast project, the researchers aren’t sure whether it will be useful for other synthetic biology projects. “If you want to make the kinds of changes we made for yeast, it’s very straightforward,” says Mitchell, “but for other types of changes you’d have to write the code.” The software is open source, she notes, so interested parties could build on it.

Whether it’s BioStudio or another program, the fast-growing field of synthetic biology will need software to help geneticists explore this new design frontier: the design of life itself.

Some synthetic biology startups are trying to adapt simple organisms like yeast to make them produce useful products, such as biofuels, vaccines, or even perfume. Other researchers are more interested in constructing whole critters from scratch, in hopes of gaining new insights into the mechanics of life in the process.

The first completely synthetic genome was bacterial, constructed at the J. Craig Venter Institute in 2010; its single chromosome measured 1 Mb in length. From that start, the 12-Mb yeast genome marks a big step up. And Boeke is part of a group that has proposed to scale up considerably from the single-celled yeast. Last June they called for the creation of a synthetic human genome as part of a massive project to develop DNA assembly technology; they published an article in the journal Science that suggested a $100 million investment to get the project off the ground.

The human genome clocks in at 3 billion letters, or 3 Gb. To tackle that project, genome designers and coders may have to get together for a Synthetic Bio Hackathon.

By Eliza Strickland
Posted 10 Mar 2017 | 20:30 GMT

Thursday, August 23, 2018

New Animal Species Discovered in the Sierra Nevada de Santa Marta

They are the most resistant microorganisms on the planet thanks to their physical and anatomical qualities

Expert researchers point out that tardigrades, also known as ‘water bears,’ could help decipher the mechanisms these organisms use to keep their cellular structure and DNA intact while in a dry state, with an eye toward creating new alternatives for preserving, for example, human organs.
A large-scale representation of a tardigrade. These animals have an average size of 5 millimeters.
Photo: Roger Urieles

The Research Group on Management and Conservation of Fauna, Flora and Strategic Neotropical Ecosystems (MIKU), directed by Dr. Sigmer Quiroga Cárdenas and made up of faculty and students of the Universidad del Magdalena, has discovered six new species of tardigrades from the Sierra Nevada de Santa Marta.

The discovery grew out of research on biodiversity in this strategic ecosystem. During field trips carried out as part of several research projects, samples of mosses and lichens were collected in which the tardigrades were found; microscope slides were then prepared from them for analysis by the MIKU experts and students.

“We are placing at the disposal of the country and the scientific world species with which we can investigate tolerance mechanisms that will be of great use to science,” said Sigmer Quiroga, director of the MIKU research group.

The new species (Bryodelphax kristenseni, Doryphoribius rosanae, Itaquascon pilatoi, Milnesium kogui, Minibiotus pentannulatus, Paramacrobiotus sagani) are part of the roughly seven thousand tardigrade specimens of different genera deposited in the Biological Collections Center of the Universidad del Magdalena, making it the largest collection of these organisms in Colombia.

The university professor asserts that the study of these microscopic animals’ resistance could find applications in various fields of human interest, such as medicine, pharmaceuticals, and food engineering, among others. “When water becomes scarce, these aquatic organisms can enter a state of anhydrobiosis in which they can withstand extreme temperatures or radiation levels that would be lethal to other organisms,” he noted.

He adds that “if the mechanisms these organisms use to keep their cellular structure and DNA intact while in a dry state are deciphered, new alternatives could be created for preserving, for example, human organs.”

Tardigrades: an ‘indestructible’ animal
Recognized as the most resistant animals on the planet thanks to their physical and anatomical qualities, tardigrades, also called ‘water bears,’ are the focus of study of this research group attached to the university’s Office of the Vice-Rector for Research.

Their name refers to these animals’ characteristic movement: they are slow walkers that measure between 1 and 2 millimeters, but most important is their ability to survive the most adverse conditions.

The study of tardigrades has been sporadic in Colombia; through the MIKU research group, the Universidad del Magdalena has been a pioneer and leader in research on these species in the Sierra Nevada de Santa Marta for more than a decade.

The research team is made up of biologists who graduated from the university: Rosana Londoño, Anisbeth Daza, and Martin Caicedo, together with students Natalia Cantillo (UNIMAGDALENA) and Dayanna Venencia (Uniatlantico), joined by Dr. Oscar Lisi of the University of Catania, Italy.

Tardigrades are currently represented by more than 1,200 species, classified into two classes: Heterotardigrada and Eutardigrada. Through outreach campaigns, the MIKU research group seeks to raise awareness of the importance of these animals in ecosystems and of their use as biological models for studying mechanisms of extreme tolerance.

ORIGINAL: UniMagdalena
08/21/2018 04:56 PM, by the Communications Office (Dirección de Comunicaciones)

Monday, August 20, 2018

CMU Engineers Find Innovative Way to Make a Low-Cost 3D Bioprinter


Starting with a MakerBot 3D printer, researchers tapped open-source hardware and software to build an affordable piece of tech that can print laboratory-grown cells on a large scale.

While 3D printers have already caused quite a buzz in the healthcare field — facilitating difficult surgeries and opening the door to low-cost prosthetics — the concept of bioprinting on a large scale has eluded the industry for the most part. But a recent breakthrough from Carnegie Mellon University’s College of Engineering could change all that.

Bioprinting, or printing laboratory-grown cells in order to form living structures, has the ability to profoundly transform healthcare.

The approach could revolutionize regenerative medicine, enabling the production of complex tissues and cartilage that would potentially support, repair or augment diseased and damaged areas of the body, Science Daily reports.

While researchers from the Massachusetts Institute of Technology and elsewhere have been digging into how to facilitate low-cost bioprinting, these options are often limited in scale or availability. The new open-source and low-cost solution from CMU, which makes use of a standard desktop 3D printer, could open the door to printing biomaterials, like artificial human tissue, and fluids on a larger scale, according to a new paper released by CMU.

“Bioprinting has historically been limited in volume, so essentially the goal is to just scale up the process without sacrificing detail and quality of the print,” Kira Pusch, an author of the paper and a recent graduate of CMU’s Materials Science and Engineering undergraduate program, tells CMU’s news site. “What we’ve created is a large volume syringe pump extruder that works with almost any open source fused deposition modeling (FDM) printer. This means that it’s an inexpensive and relatively easy adaptation for people who use 3-D printers.”

Open-Source Tools Lead to a ‘Democratizing’ Bioprinter
What makes the CMU bioprinting method unique is a technique the lab developed called Freeform Reversible Embedding of Suspended Hydrogels (FRESH) 3D bioprinting that is designed to specifically print “soft and living materials,” Adam Feinberg, another author of the paper and an associate professor of materials science and biomedical engineering at CMU, tells Robotics Tomorrow. The technique essentially prints the tissue in a gel that is later carefully melted away to ensure the cells remain viable.

Feinberg notes that the technique is capable of printing a wide range of materials, “including collagen and other extracellular matrix proteins,” which make up most tissues in the body.

“Usually there’s a trade-off, because when the systems dispense smaller amounts of material, we have more control and can print small items with high resolution, but as systems get bigger, various challenges arise,” Feinberg, who is also a member of the Bioengineered Organs Initiative at Carnegie Mellon, tells CMU’s news site. “The [large-volume extruder (LVE)] 3-D bioprinter allows us to print much larger tissue scaffolds, at the scale of an entire human heart, with high quality.”

The lab began its journey toward large-scale and low-cost bioprinting after it purchased a MakerBot 3D printer. Over the course of six years, researchers modified the printer using open-source hardware and software. In the spirit of that endeavor, the team has made its designs for the printer open source, hoping to further collaboration and discovery in the medical field.

“Essentially, we’ve developed a bioprinter that you can build for under $500, that I would argue is at least on par with many that cost far more money,” Feinberg tells CMU’s news site. “Most 3-D bioprinters start between $10,000 and $20,000. This is significantly cheaper, and we provide very detailed instructional videos. It’s really about democratizing technology and trying to get it into more people’s hands.”

Juliet is the senior web editor for StateTech and HealthTech magazines. In her six years as a journalist she has covered everything from aerospace to indie music reviews — but she is unfailingly partial to covering technology.

You Should Know These 20 Technology Leaders Driving China's A.I. Revolution

China’s leading technology companies are on fire, heavily investing in artificial intelligence and building true global presences. McKinsey recently reported that academic and research institutions in the country publish more cited research papers than the US, UK, or any other global leader in AI, producing nearly 10,000 papers in 2015 alone.

Backed by strong government mandates and billions of dollars of both private and public investments, China is challenging the US for position of global AI leader. Fearful of competition, the US government is considering placing restrictions on Chinese investments in AI and technology in the United States. In many sectors, such as healthcare, China may already be ahead of America in applying AI to critical public issues.

You might recognize names like Andrew Ng, Sebastian Thrun, Geoffrey Hinton, or Yann LeCun as important figures in AI, but few Westerners can name the key leaders driving AI innovation in China and at Chinese companies globally. These executives, entrepreneurs, professors, and researchers helm the most important Chinese tech companies and research labs and are respected widely for their technical expertise and accomplishments.

We’ve researched and curated 20 of the most important figures in the Chinese AI landscape that you should know.

1. KAI-FU LEE
Co-Founder of Sinovation Ventures, Former President of Google China

Kai-Fu Lee is a globally recognized technology leader with executive experience at Apple, Microsoft, and Google. He got his BS in Computer Science from Columbia University and his PhD from Carnegie Mellon. Lee established Google China prior to co-founding Sinovation Ventures, a venture capital firm actively funding technology and AI startups in the US and China.

With celebrity status in China and over 50 million followers on Chinese social networks, Lee has become an oracle in predicting trends in Chinese tech. Lee told CNBC recently that artificial intelligence is the “singular thing that will be larger than all of human tech revolutions added together, including electricity, the industrial revolution, internet, and mobile internet.”

2. QI LU
Group President & COO, Baidu

Qi Lu was hired by Baidu to lead the company’s strategic efforts in AI and push forward integration and collaboration within the company. Every Baidu business unit, including AI teams working on autonomous driving, reports to Lu. A spokesperson from Baidu stated: “With Dr. Lu on board, we are confident that our strategy will be executed smoothly and Baidu will become a world-class technology company and global leader in AI.”

Prior to joining Baidu, Lu was personally recruited by Steve Ballmer to join Microsoft where he eventually became EVP of the Applications & Services Group. Lu started his professional career in IBM’s research labs, before joining Yahoo and rising to EVP of the Search & Advertising Group. He completed a BS in Computer Science at Fudan University and was invited by Carnegie Mellon professor Edmund M. Clarke to pursue his PhD at CMU.

3. HAIFENG WANG
Head of AI Group, Baidu

After Andrew Ng’s departure from Baidu, Haifeng Wang took over as leader of the expanded AI Group (AIG), consisting of
  • Baidu’s Institute of Deep Learning, 
  • Big Data Lab, 
  • Silicon Valley AI Lab, 
  • Augmented Reality Lab, 
  • Natural Language Unit, 
  • AI Platform Unit, and 
  • a few other departments.
Wang’s technical specialty is natural language processing (NLP) and machine translation and he has authored over 100 academic papers in AI. He applies his expertise to Baidu’s efforts in
  • NLP, 
  • computer vision, 
  • speech recognition, 
  • knowledge graphs, 
  • personalized recommendations, and 
  • deep learning. 
Wang is also an adjunct professor at Harbin Institute of Technology where he received his BS, MS, and PhD degrees in Computer Science.

4. TONG ZHANG
Executive Director of AI Lab, Tencent

The battle for top AI talent is incredibly fierce. Tong Zhang was poached from Baidu by Tencent last year to lead Tencent’s newly established AI lab. Formerly he was head of Baidu’s Big Data Lab, worked at IBM and Yahoo, and was a professor at Rutgers University.

With a team of over 200 engineers, Zhang is focused on developing Tencent’s capabilities in machine learning, computer vision, speech recognition, and natural language processing and applying new AI technologies to the company’s vast array of popular consumer products like WeChat.

5. JINGREN ZHOU
Chief Scientist and Vice President of Alibaba Cloud, Alibaba

Alibaba Cloud launched in 2009 and is now Alibaba’s fastest growing business unit. Similar to Amazon Web Services (AWS), Alibaba Cloud, also called Aliyun, emerged out of the company’s need for enormous computing power to handle millions of online shopping transactions.

Jingren Zhou leads big data and AI research at Alibaba Cloud’s Institute of Data Science Technology (iDST). In this role, he drives Alibaba’s AI technologies in speech, natural language, image and video processing, and large-scale machine learning.

Prior to joining Alibaba, Zhou was an engineering manager at Microsoft in charge of developing the big data computation platform supporting Windows, Office, and Bing. He received his BS from the University of Science and Technology of China and his PhD in Computer Science from Columbia University.

6. XIAOFEI HE
President, DiDi Research

DiDi Chuxing is the “Uber of China”, with over 50TB of real-time data and over 9 billion routes driven per day. DiDi Research, the “brains of Didi Chuxing”, is a machine learning research institute set up by the company to predict demand, reduce surge impact, and also develop self-driving car technology.

President Xiaofei He received his BS in Computer Science from Zhejiang University and his PhD from the University of Chicago. Prior to helming DiDi Research, he worked as a research scientist at Yahoo Research Labs and later joined Zhejiang University as a professor focused on applying mathematics and data analysis to solve important problems in pattern recognition, multimedia, and computer vision.

7. YUANQING LIN
Head of Baidu Research, Baidu

As Head of Baidu Research, Yuanqing Lin manages Baidu’s research labs.

Along with Wei Xu, he will lead Baidu’s contributions to China’s government-funded National Engineering Laboratory of Deep Learning Technology, which is co-helmed by Tsinghua and Beihang Universities.

Prior to Baidu, Lin was the head of Media Analytics at NEC Labs America where he led teams focusing on computer vision research for mobile search and driverless cars. Lin received his MS degree in Optical Engineering from Tsinghua University and his PhD in Electrical Engineering from University of Pennsylvania.

8. PINPIN ZHU
President & CTO, Xiaoi

Xiaoi is China’s leading platform for conversational AI, powering the majority of the country’s bot and virtual assistant experiences. Established in Shanghai in 2001, the company’s technologies are used by hundreds of medium to large enterprises, government entities, and over 500 million users collectively.

Pinpin Zhu’s numerous patents in the space – including ones for “Chatting Robot System” and “SMS Robot System” – drove Xiaoi’s technical dominance in conversational interfaces. In addition to running Xiaoi, Zhu holds a Doctor of Science degree from the Chinese Academy of Sciences, has been appointed to the China National Information Technology Standardization Committee, and has received numerous awards and accolades for his contributions to the field.

9. WEI XU
Distinguished Scientist, Baidu

In a company full of highly credentialed scientists, researchers, and engineers, Wei Xu is the only one with the title “Distinguished Scientist”. He is widely respected within the company for his technical chops, notably his work on PaddlePaddle, a deep learning toolkit that was open-sourced in late 2016. In development for over three years, PaddlePaddle is used to power search rankings, targeted advertising, image classification, translation, and self-driving cars.

Xu received his Bachelor’s degree at Tsinghua University, his MS from Carnegie Mellon, and was previously a researcher at NEC Labs and Facebook before joining Baidu.

10. WANLI MIN
Principal Data Scientist, Alibaba

Wanli Min led the research and development of Alibaba Cloud’s (Aliyun) artificial intelligence system, Little Ai, which Alibaba has deployed internally to support customer service and traffic-pattern prediction for the company’s flagship e-commerce business. Min also used machine learning to predict the winner of a top-rated Chinese reality TV show called “I Am Singer” and helped city planners in Guangdong province optimize traffic lights in real time to reduce congestion.

Min entered college at the age of 14 and received his Bachelor’s from the University of Science & Technology of China and a PhD in Statistics from the University of Chicago.

11. KUN JING
General Manager of Duer, Baidu

Duer is Baidu’s answer to Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and Google’s Assistant. The conversational AI platform powers virtual assistant capabilities in a number of devices, ranging from XiaoYu, China’s version of the Amazon Echo, to voice-activated smart televisions.

Kun Jing leads the Duer business unit. Prior to joining Baidu, Jing was Microsoft’s R&D Director and created Xiaoice, a popular chatbot that went viral on Tencent’s WeChat and Sina’s Weibo. Xiaoice has over 20 million registered users who interact with the bot an average of 60 times a month, earning it the rank of Weibo’s top influencer.

12. DONG YU
Deputy Director of AI Lab, Tencent

Hired as deputy head of Tencent’s AI Lab, Dong Yu co-runs the new lab with Tong Zhang and spearheads research in speech recognition and natural language understanding. Prior to joining Tencent, Yu was a principal researcher in Microsoft Research’s Speech and Dialog Group, an adjunct professor at Zhejiang University, a visiting professor at the University of Science and Technology of China, and a visiting researcher at Shanghai Jiao Tong University. He received a Bachelor’s in Electrical Engineering from Zhejiang University and a PhD in Computer Science from the University of Idaho.

“I’m excited to join AI Lab,” Yu shares. “Over the past decade, Tencent has accumulated abundant experience in application scenarios, developed a massive data bank, established powerful computing capabilities, and built an outstanding team of technology experts; all which have helped form the foundation of in-depth research and AI application at Tencent today.”

13. ADAM COATES
Director of Silicon Valley AI Lab, Baidu

Coates received his BS, MS, and PhD degrees in Computer Science from Stanford University and has worked on everything from computer vision for autonomous cars and deep learning for speech recognition to machine learning for helicopter acrobatics. At Baidu, he worked on DeepSpeech, a speech recognition and transcription engine that performs as well as native Mandarin speakers, and DeepVoice, a text-to-speech synthesis engine that generates believable human-like audio.

Coates is particularly excited about putting AI in the hands of real-world consumers. When he was selected by MIT Technology Review as one of 35 Innovators Under 35 in 2015, he explained that “in rapidly developing economies like in China, there are many people who will be connecting to the Internet for the first time through a mobile phone. Having a way to interact with a device or get the answer to a question as easily as talking to a person is even more powerful to them. I think of Baidu’s customers as having a greater need for artificial intelligence than myself.”

14. KAI YU
Founder & CEO, Horizon Robotics

Formerly head of Baidu’s Institute of Deep Learning, Kai Yu left Baidu to start Beijing-based startup Horizon Robotics. Funded by leading investors like Yuri Milner and Sequoia Capital, Yu’s mission is to become the “Android of Robotics,” a pervasive AI system that powers all of our smart devices. Unlike other Chinese tech giants which dominate in the cloud, Horizon aims to adapt AI to every piece of hardware in the physical world.
Horizon has launched two platforms to date: 
  • Anderson for smart homes and 
  • Hugo for smart driving. 
Anderson imbues home appliances with capabilities such as facial recognition and automatic ordering, while Hugo is an advanced driver assistance system that performs real-time pedestrian and object detection even in adverse weather conditions.

Yu received his BS and MS degrees in Electrical Engineering from Nanjing University and his PhD in Computer Science from Ludwig-Maximilians-Universität München in Germany.

15. JING WANG
Former Senior Vice President of Engineering, Baidu

While at Baidu, Jing Wang managed over 5,000 engineers in numerous business units, including the ones he founded:
  • Mobile, 
  • Cloud Computing, 
  • Big Data, 
  • Cybersecurity, 
  • Baidu Research, and 
  • Autonomous Driving. 
He left the company shortly after Andrew Ng’s resignation to start his own self-driving car company, and is widely credited with driving forward Baidu’s progress in the space.

Prior to joining Baidu, Wang was Deputy Head of Google’s Shanghai engineering office as well as eBay China’s CTO and R&D general manager. He received his Bachelor’s from the University of Science & Technology of China and his Master’s in Computer Science from the Chinese Academy of Sciences.

16. BO ZHANG
Professor of Computer Science and Technology, Tsinghua University

As a professor at Tsinghua University, Bo Zhang’s research interests include AI, machine learning, pattern recognition, knowledge engineering, and robotics. His notable academic achievements include advances in robotic task and motion planning, probabilistic logic neural networks (PLN), and machine learning algorithms for image retrieval and classification and webpage structure mining.

Along with Baidu and Wei Li of Beihang University, Zhang was selected to co-lead the government-funded National Engineering Laboratory of Deep Learning. He is a member of the Chinese Academy of Sciences and received his Bachelor’s in Automatic Control from Tsinghua University.

17. HUA WU
Technical Chief of NLP Group, Baidu

Hua Wu contributed a number of technical breakthroughs in 
  • natural language processing (NLP), 
  • dialogue systems, and 
  • neural machine translation (NMT)
in her seven-year tenure at Baidu. The New York Times hailed her research work in multi-task learning as “pathbreaking,” and she successfully deployed her inventions at scale to hundreds of millions of users of Baidu’s translation products. Wu is also responsible for the technology behind Baidu’s conversational AI, Duer.

Wu received her PhD from the Chinese Academy of Sciences and co-chairs leading academic AI conferences such as ACL and IJCAI.

18. WEI LI
President and Professor of Computer Science, Beihang University

Along with Bo Zhang of Tsinghua University and senior executives from Baidu, Wei Li was selected to co-lead China’s National Engineering Laboratory of Deep Learning. He is a member of the Chinese Academy of Sciences and also president of Beihang University. Li has won numerous accolades and prizes for his technical contributions in artificial intelligence and network computing.

Li graduated from the Department of Mathematics and Mechanics of Beijing University and received his PhD in Computer Science from the University of Edinburgh.

19. HONGBIN ZHA
Professor of Machine Learning, Peking University

“China hopes to leap-frog the US and other Western countries by vast and fast investment in the AI industry,” says Hongbin Zha, AI researcher and professor at Beijing’s Peking University. Zha directs the Key Lab of Machine Perception at Peking University and collaborates with Microsoft Research Asia alongside other AI leaders across the continent. His research interests include computer vision theory, virtual reality, and robotics.

He received his Bachelor’s degree in Electrical Engineering from Hefei University of Technology in China and his MS and PhD degrees in Electrical Engineering from Kyushu University in Japan.


20. YUNJI CHEN
Professor, Chinese Academy of Sciences

In 2015, Yunji Chen was selected by MIT Technology Review as one of their top 35 Innovators Under 35. Described as “iconoclastic and cosmopolitan”, he was chosen for his work in designing specialized deep-learning processors which dramatically reduce the computational costs of large-scale machine learning. His dream is to enable even common cell phones to be “as powerful as Google Brain”.

Chen entered college at age 14 and completed his PhD with lightning speed by the age of 24. He’s now chief architect of the Godson-3C, a microprocessing chip that reduces energy requirements for computers to recognize objects and translate languages and is developing the Cambricon, a brain-inspired processor chip that models human nerve cells and synapses to facilitate deep learning. The research team is led by Chen and his younger brother, Tianshi Chen, two of the youngest professors at the Chinese Academy of Sciences.

Adelyn is the Head of Marketing at TOPBOTS. She's got a decade of experience growing billion-dollar companies like Eventbrite, NextDoor, and Amazon. Follow her on Twitter at @adelynzhou to learn how to accelerate your growth with AI.

Jun 18, 2017