Hello!
We’re excited to announce that the new HPC nodes are ready for use! There are 148 nodes, each with 20 cores and 128 GB of memory, for a total of 2,960 cores and roughly 18.5 TB of memory. These nodes have been added to a new partition called “univ2” and to the existing “pre” partition.
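To target the new nodes, jobs can request the “univ2” partition in their submit script. A minimal sketch is below, assuming standard SLURM sbatch directives; the memory, walltime, and executable name are placeholder values you should replace with your own:

```shell
#!/bin/bash
#SBATCH --partition=univ2      # the new partition described above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=20   # each new node has 20 cores
#SBATCH --mem=120000           # per-node memory in MB (placeholder, under the 128 GB limit)
#SBATCH --time=01:00:00        # walltime (placeholder; set to what your job needs)

srun ./my_program              # replace with your actual executable
```

Jobs submitted to the “pre” partition may also land on these nodes, so keep the connectivity limitation described below in mind there as well.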
We’ve run into some complications setting up the new InfiniBand network, and as a result these new nodes do not have outbound network connectivity. They can only communicate with other SLURM nodes, the Gluster filesystem, aci-service-1, and aci-service-2. This means the new nodes cannot run jobs that need to reach an external license server, such as COMSOL and MATLAB jobs. To finish setting up the new InfiniBand network we’ll need to take the cluster down once more. We do not yet have a date set for this downtime, but we’ll give you at least 2 weeks’ notice.
If you run into problems or have questions please let us know.
Neil Van Lysel
Systems Administrator
Center for High Throughput Computing
chtc@xxxxxxxxxxx