A student feat revives the debate on Chinese supercomputers

The stunning performance of a mini-supercomputer built by Chinese students is impressive, but it also raises many questions.

Fugaku, the star of Japanese supercomputers, reigned supreme over the discipline for two years, an eternity in this field. It was admittedly relegated to second place very recently by the incredible Frontier (see our article), but it remains an engineering marvel that is still very difficult to beat in terms of raw power. Yet that is exactly what a team of Chinese students managed to do with a machine that is immensely less powerful on paper.

According to the South China Morning Post, it all started with a project at Huazhong University of Science and Technology. In collaboration with the technology giant Huawei, the students developed their own mini-supercomputer with the means at hand.

They tested their machine on a Single Source Shortest Path (SSSP) problem: finding the shortest routes from one vertex of a graph to all the others. It is an algorithmic problem that requires tremendous raw power to solve at scale; it is therefore often used as a benchmark for comparing the performance of supercomputers.
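For the curious, here is a minimal Python sketch of the textbook way to solve SSSP, Dijkstra's algorithm. It is purely an illustration of the problem for the reader, not the benchmark code run on these machines, which chews through graphs many orders of magnitude larger.

```python
import heapq

def sssp(graph, source):
    """Shortest distances from `source` to every reachable vertex.

    graph: dict mapping each vertex to a list of (neighbor, weight) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]  # min-heap of (distance, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: u was already reached by a shorter path
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Toy graph with 4 vertices and weighted directed edges
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
print(sssp(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

At supercomputer scale, the difficulty is not this loop itself but distributing a gigantic graph across thousands or millions of cores, which is precisely where the question of dependencies, discussed below, comes into play.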

© Weibo (via South China Morning Post)

A student project against a real titan

On paper, one might therefore expect encouraging but, at best, modest results. Against all odds, the results were absolutely stunning: on this SSSP problem, the students' DepGraph Supernode proved twice as fast as the famous Fugaku!

It is important to note that this result does not mean the machine is technically more powerful than Fugaku, very far from it. It has simply been heavily optimized to chew through SSSP problems. But it is still quite a feat.

And for good reason: this machine is nowhere near the raw theoretical power of the Japanese giant. Fugaku packs more than 7.6 million CPU cores, while the DepGraph Supernode has only… 128!

A number that looks like a typo. That core count corresponds more to a (very) high-end professional workstation than to a true elite supercomputer; so how could this student project pip a titan of the discipline at the post?

© Jeremy Bezanger – Unsplash

A David with 128 cores who cunningly defeated Goliath

The answer lies in the problem that the students sought to tackle with this stunning proof of concept: the interdependence of cores.

To put it simply, a modern CPU (Central Processing Unit) contains several cores. Each represents a logical subunit that can perform calculations on its own; when pooled together, they form a machine capable of a considerable amount of computational work.

But what happens when you want to perform complex calculations where each step depends on the previous one? In that case, the cores become interdependent and we end up with a bottleneck: some have to wait for their colleagues to finish their work.
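A toy Python sketch (our illustration, not taken from the article) makes the contrast concrete: independent tasks can be handed out to several cores at once, while a chain of dependent steps is forced to run one after the other, no matter how many cores are available.

```python
from concurrent.futures import ThreadPoolExecutor

def step(x):
    return x + 1  # stand-in for an expensive computation

# Independent work: the eight tasks have no links between them,
# so a pool of workers can process them side by side.
# (Python's GIL limits true CPU parallelism here; what matters
# is the dependency structure, not the raw speed.)
with ThreadPoolExecutor(max_workers=4) as pool:
    independent = list(pool.map(step, range(8)))

# Dependent work: each step needs the previous result, so the
# chain runs serially and any extra cores simply sit idle.
x = 0
for _ in range(8):
    x = step(x)
```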

And according to the South China Morning Post, it is precisely this interdependence that the students exploited. They explain that they developed a brand-new architecture and several software solutions that allowed them to optimize the performance of each core by “reducing the chaos caused by dependencies”.
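The article does not reveal how the DepGraph architecture actually works, so the following is only a hypothetical illustration of what “reducing the chaos caused by dependencies” can mean in software. One classic trick is to group tasks into levels that share no dependencies: every task in a level can then be dispatched to all the cores at once, and synchronization is only needed between levels.

```python
from collections import defaultdict, deque

def topological_levels(deps):
    """Group tasks into levels of mutually independent work.

    deps: dict mapping each task to the set of tasks it depends on.
    Tasks within a level can run in parallel; levels run in order.
    """
    indegree = {task: len(d) for task, d in deps.items()}
    dependents = defaultdict(list)
    for task, d in deps.items():
        for dep in d:
            dependents[dep].append(task)

    frontier = deque(t for t, n in indegree.items() if n == 0)
    levels = []
    while frontier:
        level = list(frontier)
        levels.append(level)
        frontier = deque()
        for task in level:
            for nxt in dependents[task]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    frontier.append(nxt)
    return levels

# Toy dependency graph: B waits on A; C and D both wait on B.
deps = {"A": set(), "B": {"A"}, "C": {"B"}, "D": {"B"}}
print(topological_levels(deps))  # [['A'], ['B'], ['C', 'D']]
```

In this toy example, C and D both wait on B, but once B is done they can run on two cores simultaneously; scaled up, this kind of scheduling keeps all 128 cores busy instead of leaving them waiting on each other. Again, this is our sketch of a general technique, not the students' actual method.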

Very little information freely available…

At first glance, this looks very much like the first step of an approach that could prove revolutionary. But the concrete potential of this work remains as obscure as its technical details. Indeed, the researchers do not appear to have published a scientific paper detailing their process. And that is a shame, because it would be very interesting to know whether this approach could be applied to supercomputers that house millions of cores; intuitively, that seems very complicated at the very least.

But that doesn't detract from the potential of this proof of concept; the implications are numerous and very interesting. First, it is strong evidence that modern computing probably has considerable headroom: even without touching the hardware, there is likely room to multiply the performance of current systems simply by optimizing the software.

But the main question that emerges from these gray areas concerns the state of the Chinese HPC (high-performance computing) sector.

© Oak Ridge National Laboratory – YouTube screenshot

… as for the rest of Chinese HPC

Almost all institutions that operate a supercomputer document its performance on a ranking called the Top500. This ranking, whose first place was recently claimed by Frontier (see our article), allows external observers to easily compare machines and follow the thread of technological progress.

China is also very well represented at this level; it is the country with the largest number of machines in the ranking, with 173 supercomputers among the 500 most powerful in the world. On the other hand, some of the most remarkable are conspicuous by their absence.

Several sources have claimed, for example, that Xi Jinping's country built the world's first two “exascale” supercomputers as early as 2021, well before the arrival of Frontier, which officially claimed that title only very recently. Yet, surprisingly, there is no trace of these machines in the Top500, even though this information would place them in 1st or 2nd position. And these are only isolated examples.

If these machines are conspicuous by their absence from the ranking, it is not because they are not up to the task technologically, very far from it. Quite simply, the Chinese contingent prefers to keep their performance figures secret.

High-performance computing and artificial intelligence are among the Chinese government’s major areas of work. © Haluk Beyazab – Flickr

Supercomputers, a leading strategic resource

The reasons for this mystery remain unclear, but there are some tentative leads. The Chinese government has never hidden the fact that high-performance computing, particularly applied to artificial intelligence, is one of its top priorities.

It is indeed an incredibly powerful tool that Xi Jinping wants to put at the service of China's global influence; the technical characteristics of these machines can therefore be considered strategic data that deserves to be kept secret.

For the rest of the world, it is therefore impossible to know what to expect. Is Chinese HPC on the verge of 10-exascale, ten times the standard established by Frontier, as Data Center Dynamics recently claimed? That information is unfortunately unverifiable. What is certain, however, is that the Top500 must be taken with a grain of salt, because real computing monsters may already be running quietly out of sight.
