A closer look at Harvard and Google's HPC heart research project

That's a massive workload you've got there – how much does it cost?

Google is working with Harvard University on a medical research program that uses public cloud resources, rather than a supercomputer, to run very large-scale simulations.

The research in question involves simulations for a novel approach to unclogging arteries to tackle heart disease. The complexity of the simulations and the amount of data they involve require tons of compute. So much, in fact, that only a few supercomputers in the world would be suitable.

Getting access to these supers for large-scale research can be difficult, but the program will highlight how cloud resources can be used instead.

"Today, the level of computing power required to run simulations of this complexity and at this scale is available from only a small handful of supercomputers around the world," Professor Petros Koumoutsakos said in a statement.

Koumoutsakos is research lead at the Harvard John A Paulson School of Engineering and Applied Sciences (SEAS), and is said to have pioneered the concept of creating digital twins of therapeutic devices for circulatory diseases.

In this case, the devices are magnetically controlled artificial flagella – synthetic versions of the whip-like tails that propel bacteria – and the research concerns using them to attack and dissolve blood clots and circulating tumor cells in human blood vessels.

The simulations require an accurate representation of complex blood vessels, blood flow, and the action of the magnetically controlled artificial flagella – hence the need for enormous amounts of compute power, as well as significant technical expertise, which is where Google comes in. Google is also jointly sponsoring the research, along with investment outfit Citadel Securities.

According to Koumoutsakos, the program aims to demonstrate that public cloud resources can be harnessed to handle large-scale, high-fidelity simulations for medical applications.

"In doing so, we hope to show that easy access to massively available cloud computing resources can significantly reduce time to solution, improve testing capabilities and reduce research costs for some of humanity's most pressing problems."

But it's not as if high-performance computing (HPC) workloads aren't already routinely run in public cloud environments. Our colleagues over at The Next Platform published a short book, "The State of HPC Cloud," back in 2016, for example. So what is different about this?

In response to queries from The Register, Google Cloud Chief Technologist for HPC Bill Magro said: "This research is the beginning of a journey to push the scale of cloud-based HPC to the extreme in order to solve some of the world's most complex calculations and advance heart disease research. At this stage, it's critical for us to validate the scientific approach, the technology infrastructure and the scalability of the overall approach."

According to Google, the demonstrations so far have used Google Cloud's workload-optimized Compute instances, which it said are configured to support the needs of multi-node, tightly coupled HPC workloads.

The simulations were run on clusters of virtual machines, deployed using Google Cloud's HPC Toolkit, but do not appear to have used hardware accelerators such as GPUs, which are commonly used to speed up complex calculations in HPC applications.
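
The article doesn't detail the Harvard codes themselves, but "multi-node, tightly coupled" generally means many cooperating processes that exchange data every timestep over a fast network rather than working independently. The sketch below is our own toy illustration of that pattern using mpi4py – it is not the project's code, and it assumes mpi4py and NumPy are installed and the script is launched with mpirun across the cluster of VMs.

```python
# Illustrative sketch only: a toy 1D stencil update with halo exchange,
# the communication pattern that makes an HPC job "tightly coupled".
# This is NOT the Harvard/Google code; it assumes mpi4py and numpy are
# installed and the script is launched with mpirun across the VM cluster.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1_000_000                      # grid points owned by this rank
u = np.random.rand(n_local + 2)          # +2 ghost cells for neighbours

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(100):
    # Exchange boundary values with neighbouring ranks every iteration --
    # this constant chatter is why network latency dominates at scale.
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    u[1:-1] = 0.5 * (u[:-2] + u[2:])     # simple smoothing stencil

    # Global reduction: every rank must wait for every other rank.
    residual = comm.allreduce(float(np.abs(np.diff(u)).max()), op=MPI.MAX)

if rank == 0:
    print(f"final residual across {size} ranks: {residual:.3e}")
```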

Despite the absence of GPUs, Prof Koumoutsakos and his team claim that initial tests achieved 80 percent of the performance available from dedicated supercomputer facilities running extensively tuned code. These tests are understood to have used two HPC codes: one said to have been a finalist for the Gordon Bell prize in 2015, and the open source LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator), which is used for particle simulations.

(One of the finalists for the Gordon Bell prize in 2015 was "Massively Parallel Models of the Human Circulatory System," with research led by Amanda Randles of Duke University and a team of collaborators from Lawrence Livermore National Laboratory and IBM.)
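
For the curious, LAMMPS is normally driven by plain-text input scripts and also ships a Python wrapper. The snippet below runs the stock Lennard-Jones melt benchmark through that wrapper – a generic example of the kind of particle simulation LAMMPS handles, assuming the LAMMPS Python module is installed, and emphatically not the Harvard team's blood-flow or flagella model.

```python
# Generic LAMMPS example (the classic Lennard-Jones melt), shown only to
# illustrate the sort of particle simulation LAMMPS performs. It assumes
# the LAMMPS Python module is installed and the script runs under mpirun;
# it is not the Harvard team's blood-flow or flagella model.
from mpi4py import MPI        # imported first so LAMMPS runs in parallel
from lammps import lammps

lmp = lammps()                # one LAMMPS instance spanning all MPI ranks
for cmd in [
    "units lj",
    "atom_style atomic",
    "lattice fcc 0.8442",
    "region box block 0 20 0 20 0 20",
    "create_box 1 box",
    "create_atoms 1 box",
    "mass 1 1.0",
    "velocity all create 1.44 87287",
    "pair_style lj/cut 2.5",
    "pair_coeff 1 1 1.0 1.0 2.5",
    "fix 1 all nve",
]:
    lmp.command(cmd)

lmp.command("run 1000")       # time-step the particles
if MPI.COMM_WORLD.Get_rank() == 0:
    print("atoms simulated:", lmp.get_natoms())
```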

Omdia chief analyst Roy Illsley told us Harvard's claim of 80 percent of the effectiveness of a dedicated supercomputer is realistic, but it would depend on the applications in question.

This is because HPC is not a single application, but a whole range of workloads that place differing demands on the capabilities of the compute infrastructure.

"Let's say they are correct, that for those applications (which is the key point here) the cloud was almost as good as a HPC. The issue as I see it is: are these applications typical, or has Google done what it did with the quantum computer announcement it made a few years ago, picked something it knows it is good at, while most of the other use cases are unproven?"

But the big question is whether running such massive simulation workloads in the cloud is more cost effective than simply booking time on a supercomputer or building an on-premises HPC cluster. Neither Google nor Harvard was prepared to disclose costs, however, which suggests to us that it isn't cheap. ®
