In 2016, the University built a new data center dedicated to scientific computing.
Before that, the CISM equipment was split between two computer rooms shared with the administrative IT services (SGSI): one, named 'Aquarium', in the Pythagore building, and another, named 'Tier-2', in the Marc de Hemptinne building. This latter room was also shared with CP3, which hosted all its storage and compute nodes there.
Now, all CISM and CP3 computers have moved to the new data center, a picture of which is shown below. It has a cooling capacity of 200 kW, which corresponds to roughly 800 to 1000 computers.
Inside the main room, the computers are hosted in racks organised in two rows placed back to back. The aisle between the two rows is confined so that all the hot air blown by the computers is trapped in the closed space.
An additional row contains the technical racks: one hosting the main network switches of the room and the other hosting the Uninterruptible Power Supply (UPS) and its batteries, as shown in the picture below.
Even though the picture only depicts 12 racks, the room is now equipped with 20 racks.
Below is a view of the temperature in the room as measured by all the sensors.
In the above picture, the colour represents the air temperature. We can see the hot air trapped in the confined aisle. The hot air is then drawn in by the in-row cooling systems and blown back into the room at a lower temperature.
The in-row cooling systems use cold water coming from the basement, where large pumps and buffer tanks are located.
The pumps push the water that was heated by the hot air of the computers up to the roof, where two systems can cool it down: an air cooling system that uses the ambient air when the outside temperature is below 12°C, and two chillers (large refrigerators) that are used when the outside air is too warm.
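The switch between the two cooling systems can be sketched as a simple threshold rule. The 12°C figure comes from the description above; the function and variable names below are purely illustrative and not taken from the actual building-management system.

```python
# Hypothetical sketch of the cooling-mode selection described above.
# Only the 12 degC threshold comes from the text; everything else is
# an illustrative assumption.

FREE_COOLING_MAX_TEMP_C = 12.0

def cooling_mode(outside_temp_c: float) -> str:
    """Return which system cools the water loop for a given outside temperature."""
    if outside_temp_c < FREE_COOLING_MAX_TEMP_C:
        return "air cooling"  # ambient air is cold enough
    return "chillers"         # mechanical refrigeration takes over

print(cooling_mode(5.0))   # air cooling
print(cooling_mode(20.0))  # chillers
```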
Once the data center was fully operational, we used artificial heating devices to simulate a very heavy computing load and verify that the power and cooling systems behaved correctly.
In 2016, an engineering student named Vincent Flon chose the data center as the topic of his master's thesis and performed CFD simulations of the room using GMSH and OpenFOAM. He was able to compare the real-life situation, when the heating devices were in use, with a computer simulation depicted in the following figure.
One important characteristic of a data center is its Power Usage Effectiveness (PUE), which is the ratio between the total power consumption of the building and the power consumption of the computers alone. The PUE of this data center is currently around 1.3, which is rather good given that the infrastructure was designed for 400 kW while the currently installed power is closer to 100 kW. The 400 kW will be reached progressively as older computers are replaced with more powerful and denser models.
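As a quick illustration of the PUE definition above: a PUE of 1.3 with roughly 100 kW of IT load would imply a total facility draw of about 130 kW. The 130 kW figure is derived here for illustration, not a measured value.

```python
# PUE = total facility power / IT equipment power.
# The numbers are illustrative, derived from the ~1.3 PUE and
# ~100 kW IT load quoted in the text.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total building power over IT power."""
    return total_facility_kw / it_equipment_kw

print(pue(130.0, 100.0))  # 1.3
```

A PUE of exactly 1.0 would mean every watt entering the building reaches the computers; the excess (0.3 here) is spent mostly on cooling and power distribution losses.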
Members of UCL (intranet) can see the Data Center III (DCIII) PUE and power consumption graphs.
Visits are organized upon request.