For what purpose?
High-performance computing is concerned with running parallel programs on compute clusters. A cluster is a set of computers that are connected together and configured so as to appear as a single, large machine. It is used for large-scale scientific computations.
A typical desktop workstation can perform several dozen billion floating-point operations per second, while a typical cluster as discussed here can do 200 to 300 times more. That compute power lies in the large number of CPUs present in the cluster, but also in the large memory (10 to 20 times that of a typical workstation) and the very fast interconnect (100 to 1000 times faster than an office network connection). While the computing power is enormous, not every program or application can benefit from it; only parallel programs, which use several CPUs at the same time, can run fast on clusters. Serial programs, which use only one CPU at a time, will not benefit from clusters, and might even run slower on a cluster than on a recent high-end laptop.
On the university campus, we operate three clusters:
- Lemaitre3, dedicated to large parallel jobs (HPC), with a fast interconnect and a fast storage system;
- Manneback, dedicated to High-Throughput Computing (HTC), suited for running a very large number of small jobs;
- Hmem, adapted to problems that require large amounts of memory.
As we are part of CÉCI, the "Consortium des Équipements de Calcul Intensif", any user who is granted access to our compute clusters automatically also has access to all the CÉCI clusters. Read more on the CÉCI clusters...

Members of UCL (intranet) can see graphs of the load of the three clusters over the last 30 days and the CPU-hours spent by all UCL groups (poles).
Access to the compute clusters is offered to any UCLouvain-affiliated researcher or student whose computational needs grow beyond what a single workstation can offer, be it for numerical simulation, optimization, number crunching, etc. Our biggest users include, for instance, NAPS, for ab initio computations of chemical interactions, and TFL, for computer simulations of physical models. Read more on how to request access...
Once granted access, users receive a private SSH key that allows them to connect to the cluster of their choice and to transfer data back and forth. Read more on how to connect to the clusters...
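As a rough sketch, connecting and copying files with that key typically looks like the following (the username `myceciuser` and the file names are placeholders; the key file name follows the `id_rsa.ceci` convention used on the CÉCI clusters):

```shell
# Connect to a cluster login node using the private key
# received at registration (username is a placeholder).
ssh -i ~/.ssh/id_rsa.ceci myceciuser@lemaitre3.cism.ucl.ac.be

# Copy an input file to the cluster home directory...
scp -i ~/.ssh/id_rsa.ceci input.dat myceciuser@lemaitre3.cism.ucl.ac.be:~/

# ...and retrieve results afterwards.
scp -i ~/.ssh/id_rsa.ceci myceciuser@lemaitre3.cism.ucl.ac.be:~/results.dat .
```

These commands require an actual cluster account and reachable login node, so they are shown here only as an illustration of the workflow.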
Usage is mostly free, except for specific circumstances. Read more about the costs....
Note about the Manneback cluster
Hmem and Lemaitre3 are clusters managed by CISM for UCL users, but also for CÉCI users. Information about them is available on the CÉCI web site.
Who was Manneback?
Charles Manneback (1894-1975) was Professor of Physics at UCL and a close friend of Georges Lemaitre. In 1947 he headed the mission that went to the USA to study computing machines, with the aim of bringing back the knowledge needed to install the first supercomputer in Belgium. He naturally became chairman of the Committee for the Promotion and Study of Electronic Mathematical Machines, which led to the creation of the "IRSIA-FNRS Machine", designed and built by the Bell Telephone Manufacturing Company (BTMC) in Antwerp. This machine, one of the first in the world designed for floating-point calculation, was installed soon after 1950.
Manneback is a heterogeneous cluster gathering machines of varying ages, often with 4 GB of RAM per core and local scratch space. Several CPU generations co-exist, from Clovertown to Skylake Gold. The cluster nodes have access to a 30 TB scratch filesystem.
It is suited to large numbers of small jobs, or to SMP jobs that use only one compute node at a time, with a maximum run time of 5 days.
- Home directory (50GB quota per user)
- Working directory /globalscratch ($GLOBALSCRATCH)
- Default queue* (5 days max)
- Reserved queues* cp3, zoe
- Generic resource*: gpu, phi
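To make the queue and resource names above concrete, here is a minimal single-node job script sketch, assuming the Slurm scheduler (commonly used on CÉCI clusters); the program name is a placeholder and the memory and time limits simply mirror the figures given above:

```shell
#!/bin/bash
# Minimal SMP job sketch for a Slurm-managed cluster (assumption:
# the scheduler is Slurm; names below mirror the list above).
#SBATCH --job-name=example
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=5-00:00:00    # 5 days, the default queue maximum
#SBATCH --mem-per-cpu=4096   # 4 GB per core, matching typical nodes
##SBATCH --gres=gpu:1        # uncomment to request a GPU generic resource

# Work in the shared scratch space rather than the home directory.
cd "$GLOBALSCRATCH"
srun ./my_program            # my_program is a placeholder
```

Such a script would typically be submitted with `sbatch job.sh` and monitored with `squeue`.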
manneback.cism.ucl.ac.be (port 22) with your CÉCI login and id_rsa.ceci file.
Server SSH key fingerprint:
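For convenience, a matching entry in `~/.ssh/config` could look like the following (the username is a placeholder; host name, port, and key file are those given above):

```
# ~/.ssh/config entry for Manneback (User is a placeholder)
Host manneback
    HostName manneback.cism.ucl.ac.be
    Port 22
    User myceciuser
    IdentityFile ~/.ssh/id_rsa.ceci
```

With this in place, connecting reduces to `ssh manneback`.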