Each year at SC, the ACM hands out one of the field's most coveted awards, the Gordon Bell Prize. The award, a regular fixture of SC since its debut in 1987, now carries a $10,000 prize sponsored by parallel computing luminary Gordon Bell. Winners demonstrate high performance on real-world applications or other performance-geared achievements, including remarkable advances in scaling, time to solution for scientific applications, or other feats of HPC might.
This year the Gordon Bell award, recognizing the demonstration of an outstanding high performance application, went to “11 PFLOP/s Simulations of Cloud Cavitation Collapse,” by Diego Rossinelli, Babak Hejazialhosseini, Panagiotis Hadjidoukas and Petros Koumoutsakos, all of ETH Zurich; Costas Bekas and Alessandro Curioni of IBM Zurich Research Laboratory; and Steffen Schmidt and Nikolaus Adams of the Technical University of Munich.
The researchers, in collaboration with the Technical University of Munich and LLNL, broke serious computational fluid dynamics ground with a simulation that harnessed 6.4 million threads on the IBM Sequoia system. The simulation, according to IBM, stands as the “largest simulation ever in fluid dynamics by employing 13 trillion cells and reaching an unprecedented, for flow simulations, 14.4 petaflop sustained performance on Sequoia—73% of the supercomputer’s theoretical peak.”
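Those headline numbers are easy to sanity-check. Below is a minimal back-of-the-envelope sketch, assuming Sequoia's commonly cited configuration (98,304 BlueGene/Q nodes, 16 cores per node, 4 hardware threads per core, 1.6 GHz, 8 flops per core per cycle); none of these configuration figures come from the article itself:

```python
# Back-of-the-envelope check of the reported Sequoia figures.
# Assumed (not from the article): 98,304 BlueGene/Q nodes, 16 cores/node,
# 4 hardware threads/core, 1.6 GHz, 8 flops per core per cycle.

nodes = 98_304
cores = nodes * 16      # 1,572,864 cores
threads = cores * 4     # 6,291,456 hardware threads, close to the 6.4M cited

peak_pflops = cores * 1.6e9 * 8 / 1e15   # ~20.1 PFLOP/s theoretical peak
sustained_pflops = 14.4                  # sustained figure quoted by IBM

print(f"threads: {threads:,}")
print(f"theoretical peak: {peak_pflops:.1f} PFLOP/s")
print(f"efficiency: {sustained_pflops / peak_pflops:.0%}")  # ~72%, in line with the quoted 73%
```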
The bubble-bursting exercises are more than just interesting to watch in action. These simulations model the complex events inside clouds of collapsing bubbles, which can yield new insight in manufacturing, medicine and beyond as scientists seek to understand how such collapses might “shatter” tumors or kidney stones, or how they shape fuel injection fluid interactions.
The researchers described their award-winning effort by pointing to how the “destructive power of cavitation reduces the lifetime of energy critical systems such as internal combustion engines and hydraulic turbines, yet it has been harnessed for water purification and kidney lithotripsy.” They go on to note that they were able to “advance by one order of magnitude the current state-of-the-art in terms of time to solution, and by two orders the geometrical complexity of the flow. The software successfully addresses the challenges that hinder the effective solution of complex flows on contemporary supercomputers, such as limited memory bandwidth, I/O bandwidth and storage capacity.”
“We were able to accomplish this using an array of pioneering hardware and software features within the IBM BlueGene/Q platform that allowed the fast development of ultra-scalable code which achieves an order of magnitude better performance than previous state-of-the-art,” said Alessandro Curioni, head of the mathematical and computational sciences department at IBM Research – Zurich. “While the Top500 list will continue to generate global interest, the applications of these machines and how they are used to tackle some of the world’s most pressing human and business issues more accurately quantifies the evolution of supercomputing.”
As IBM noted, these simulations are one to two orders of magnitude faster than any previously reported flow simulation. The last major milestone came earlier this year, when a team at Stanford University broke the one-million-core barrier, also on Sequoia.
This year the prize committee clarified its description of what it takes to produce a winner, laying out the following criteria:
The prize winner is not selected simply on raw performance numbers. Rather, the Prize Committee seeks:
- evidence of important algorithmic and/or implementation innovations
- clear improvement over the previous state-of-the-art
- solutions that don’t depend on one-of-a-kind architectures (systems that can only be used to address a narrow range of problems, or that can’t be replicated by others)
- performance measurements that have been characterized in terms of scalability (strong as well as weak scaling), time to solution, efficiency (in using bottleneck resources, such as memory size or bandwidth, communications bandwidth, I/O), and/or peak performance
- achievements that are generalizable, in the sense that other people can learn and benefit from the innovations
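For readers unfamiliar with the terminology in the scalability criterion above: strong scaling measures speedup as processors are added to a fixed-size problem, while weak scaling holds the work per processor fixed as the machine grows. A minimal sketch of the two standard efficiency metrics, using textbook definitions rather than anything specific to the winning code (the timings below are purely hypothetical):

```python
# Standard scaling metrics: T(1) is single-process runtime,
# T(p) is runtime on p processes.

def strong_scaling_efficiency(t1: float, tp: float, p: int) -> float:
    """Fixed total problem size: ideal is T(p) = T(1) / p."""
    return t1 / (p * tp)

def weak_scaling_efficiency(t1: float, tp: float) -> float:
    """Fixed work per process: ideal is T(p) = T(1)."""
    return t1 / tp

# Illustrative numbers only:
print(strong_scaling_efficiency(t1=1000.0, tp=70.0, p=16))  # ~0.89
print(weak_scaling_efficiency(t1=100.0, tp=115.0))          # ~0.87
```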
ORIGINAL: HPCwire
Nicole Hemsoth
November 22, 2013