Let me introduce you to a good friend of many computational scientists all over the world: the High-Performance Computing (HPC) system, or simply the supercomputer. It is not picky, but you have to explain your problem properly, design an algorithm to solve it and be patient while waiting for the result. HPC systems understand all modern programming languages. They are used for scientific simulations, weather forecasting, data analysis and many other things.
In an era when a personal computer (PC) is affordable and used on a daily basis, one can hardly imagine the limitations of the old machines. I remember attending one of the first classes on PC architecture. Our professor started with a historical overview and, in particular, with the so-called punched cards. These cards were widely used to create and store programs in order to run them on IBM machines. One card encoded one line of the program, so launching even a simple 100-line program was quite a challenge. Not to mention that computers at that time had less than 1 kilobyte of memory. But what impressed me most is that programmers had to reserve not only a certain date but specific hours to run their code. With so many users, the waiting period could stretch to a week or more. Running a program therefore required a good understanding of both the programming paradigm and the computer architecture, in order to properly estimate its memory usage and runtime.
A few examples
So what? Nowadays we have supercomputers with millions of cores and thousands of terabytes of memory. The TOP500 list of modern HPC systems can be found here (as of November 2018). Take a look at the Summit HPC, which is located at the Oak Ridge National Laboratory in the USA.

Impressive, isn’t it? It has almost 2.4 million processing cores in total. For comparison, the Sony PlayStation 4 has only 8 cores.
The EOS machine (see photo below), located at the Toulouse HPC centre CALMIP, was ranked 183rd on the TOP500 list when it was launched in June 2014. It used to share calculations with Météo-France. Last year, more than 600 researchers consumed 74 million hours of computational time for their simulations!

A brand-new HPC called Olympe will be launched at CALMIP next year. It will have 13,464 state-of-the-art cores, and I am looking forward to running calculations on it!
I will not go into the details of HPC cooling systems, even though cooling remains a major issue nowadays. It is a very technical problem, but the take-home message is that high temperature is the sworn enemy of computational performance. The most common solution is air cooling, which is basically an enormous air conditioner that consumes a lot of energy.
If you are concerned about ecology and green technologies, take a look at the Green500 list. A good example is the TSUBAME-KFC machine from the Tokyo Institute of Technology, Japan, which topped the Green500 list in 2014. It had an extremely high processor density; the CPUs were submerged in a non-toxic, low-viscosity oil with a high flash point of 260 °C. This solution made it possible to eliminate water usage. For more technical information see this article.

HPC systems themselves are very expensive, but the computations also have a price, based on the amount of time, energy and cores consumed. For instance, one hour of CPU calculation time on a French national supercomputer cost about €0.044 as of March 2017, as reported here.
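To get a feel for what this means in practice, here is a minimal back-of-the-envelope sketch in Python (the `job_cost` helper and the example job size are made up for illustration; only the €0.044 rate comes from the figure above):

```python
# Back-of-the-envelope cost of a batch job, billed in core-hours.
# The 0.044 EUR/core-hour rate is the March 2017 figure quoted above;
# the helper and the example job are purely illustrative.

def job_cost(n_cores, wall_hours, rate_eur_per_core_hour=0.044):
    """Price in euros: cores x wall-clock hours x hourly rate."""
    return n_cores * wall_hours * rate_eur_per_core_hour

# A hypothetical 48-hour run on 256 cores:
print(f"{job_cost(256, 48):.2f} EUR")  # -> 540.67 EUR
```

If the same rate applied to EOS, the 74 million hours consumed last year (see above) would come to roughly €3.3 million.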
(Super)computer at home
All the HPC systems discussed above are quite expensive, heavy and not that easy to maintain. But do you have a PC or a laptop with an NVIDIA or AMD video card? A video card, or graphics processing unit (GPU), is mainly used to process and visualize graphics (e.g. for computer games or video rendering). Modern GPUs have up to several thousand cores, perform operations in parallel and usually weigh less than 1 kg! Unlike CPU cores, GPU cores do not “talk” to each other to exchange data; this is the main difference between a conventional multi-core supercomputer and a GPU. So, would you believe me if I said that the video card in your PC or laptop is a supercomputer in itself?
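To illustrate this data-parallel style, here is a minimal sketch in Python. It is not actual GPU code (that would be written in something like CUDA); it is just an analogy in which a handful of CPU workers stand in for GPU cores: each one applies the same small function to its own element and never communicates with its neighbours.

```python
# A CPU analogy of the GPU data-parallel model: every worker applies
# the same function (a "kernel") to its own data element, with no
# communication between workers.
from multiprocessing import Pool

def kernel(x):
    """One independent operation per data element."""
    return x * x + 1.0

if __name__ == "__main__":
    data = list(range(16))           # one element per "core"
    with Pool(processes=4) as pool:  # 4 CPU workers stand in for GPU cores
        result = pool.map(kernel, data)
    print(result)
```

A real GPU runs this same pattern across thousands of cores at once, which is why it excels at tasks like applying one operation to every pixel of an image.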
A good example of how GPUs process information can be found in this awesome episode of MythBusters:
Overall
It is hard to imagine a computational scientist like me without access to a supercomputer. Researchers always mention the corresponding computational facilities in the Acknowledgements section of their articles. For example, all my numerical simulations so far have been performed on the EOS machine (see the A few examples section above) and on a local computational cluster in my host lab (LCAR). It is also worth mentioning that all HPC systems in the TOP500 list run the Linux operating system.
I would like to conclude with the following performance comparison published by NVIDIA, which suggests that high-performance calculations in the near future will most likely depend on the development of GPUs. Many machines in the TOP500 list already include GPUs.

If you have any questions or if you would like to hear more on this topic – feel free to write a comment!
This post was written by Evgeny Posenitskiy, who was pursuing his PhD at the Université Paul Sabatier in Toulouse.