Over the past few years, supercomputing, or high-performance computing (HPC), has been a strategic focus for the scientific and innovation ecosystem. Learn about this technology and its various applications.
The first supercomputer made available to the Portuguese scientific community was installed by FCCN, FCT's digital services unit, in 1988. Over the decades since, this area has undergone significant developments and has become one of the European Commission's investment priorities in technology and innovation. The objective of this effort, which brings together public and private organizations, is to "develop a world-class supercomputing ecosystem in Europe", as the EuroHPC JU highlights.
But what exactly does supercomputing or advanced computing consist of?
As the name implies, a supercomputer is a powerful machine capable of solving some of today's most difficult problems in real time. For this reason, these machines are especially relevant today, as they allow researchers to "find the answer to some of the most complex scientific questions", as the European Commission explains.
The technology underlying supercomputing is, of course, related to information processing, as IBM explains on its website. Traditional computers use the "sequential computing" method, working through the workload as a sequence of tasks on a single processor. A supercomputer, by contrast, uses "massively parallel computing", in which multiple tasks are executed simultaneously across tens of thousands of processors or cores.
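To make the contrast concrete, here is a minimal sketch (not from the article) using Python's standard multiprocessing module: the same hypothetical workload is run first as a sequence of tasks on a single process and then split across several worker processes, the small-scale analogue of a supercomputer's thousands of cores.

```python
# Minimal sketch (not from the article): sequential vs. parallel execution
# using Python's standard library. heavy_task and the workload sizes are
# hypothetical stand-ins for a scientific computation.
import math
from multiprocessing import Pool

def heavy_task(n: int) -> float:
    """One unit of work: a deliberately slow numerical loop."""
    return sum(math.sqrt(i) for i in range(1, n))

if __name__ == "__main__":
    workload = [200_000] * 8  # eight independent tasks

    # Sequential computing: tasks run one after another on a single processor.
    sequential = [heavy_task(n) for n in workload]

    # Parallel computing: the same tasks are distributed across four worker
    # processes and executed simultaneously.
    with Pool(processes=4) as pool:
        parallel = pool.map(heavy_task, workload)

    assert sequential == parallel  # same results, obtained in less wall time
```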
For this reason, in terms of architecture, a supercomputer uses clusters of nodes connected in a network, with each node performing part of the same task. On its website, Amazon Web Services explains that a supercomputing cluster "is composed of hundreds or even thousands of computing nodes", each "containing between 8 and 128 CPUs [Central Processing Units]".
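On real clusters, this division of work is typically expressed with a message-passing interface. The sketch below assumes the mpi4py library (not mentioned in the article) and shows the idea: each process, which could run on a different node, computes its own slice of a sum, and the partial results are combined at the end.

```python
# Minimal sketch, assuming the mpi4py library, of splitting one task across
# the processes of a cluster. Launch with e.g.: mpirun -n 4 python sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # identifier of this process (one per core/node)
size = comm.Get_size()  # total number of processes in the job

N = 10_000_000
# Each process sums over its own stride of the range 0..N.
local_sum = sum(i * i for i in range(rank, N, size))

# Combine the partial sums on process 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum of squares below {N}: {total}")
```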
The end result of this infrastructure is impressive: the world's most powerful supercomputers can perform at least one hundred million billion (10¹⁷) operations per second. And a further technological leap is expected soon. The installation of JUPITER, the first European exascale supercomputer, capable of performing 10¹⁸ operations per second, is planned for the second half of 2024.
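As a rough illustration of what these orders of magnitude mean, the back-of-the-envelope calculation below compares how long a hypothetical workload of 10²¹ operations would take at the pre-exascale (10¹⁷ operations per second) and exascale (10¹⁸) rates mentioned above; the workload size is an arbitrary assumption chosen only for illustration.

```python
# Back-of-the-envelope comparison of the performance figures in the text.
PRE_EXASCALE = 1e17  # operations per second (today's most powerful machines)
EXASCALE = 1e18      # operations per second (JUPITER's target scale)

WORKLOAD = 1e21  # hypothetical operation count for a large simulation

for name, rate in [("pre-exascale", PRE_EXASCALE), ("exascale", EXASCALE)]:
    hours = WORKLOAD / rate / 3600
    print(f"{name:>12}: {hours:.1f} hours")
# pre-exascale: 2.8 hours, exascale: 0.3 hours
```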
This information processing capacity is relevant in a wide variety of areas. As the European Commission notes, it can be used to "advance the frontiers of knowledge in virtually all scientific fields". From helping to "control and mitigate the effects of the climate crisis" to "produce safer and more environmentally friendly vehicles", as well as the "increase in cybersecurity", there are many applications for these machines in the world of science and innovation.
Another field in which supercomputing is already an essential tool is medicine. High-performance computing is used to develop new drugs, test candidate molecules, or fine-tune existing ones. A frequently shared example illustrates the power of these machines: the first attempt to sequence the human genome took 13 years. Today, a supercomputer can do it in less than a day.
Supercomputing in Portugal
Portugal stands out in the supercomputing landscape with Deucalion, the fastest national supercomputer to date, supported by the Recovery and Resilience Plan (PRR). Furthermore, Portuguese participation in the Spanish supercomputer MareNostrum 5, also funded by the PRR, strengthens international collaboration in this strategic area.
For more details on access, see the open EuroHPC call, with applications accepted until September 6th, as well as the MareNostrum 5 expression of interest, whose deadline is September 9th.
Also, stay tuned for the 5th National Advanced Computing Projects Competition, which opens on September 10th. More information on accessing these technologies is available on the FCT competitions page.
