
ShARC specifications

Total capacity

  • Worker nodes: 121
  • CPU cores: 2024
  • Total memory: 12160 GiB
  • GPUs: 40
  • Fast network filesystem (Lustre): 669 TiB (/home and /data are shared with Iceberg)

Note that some of these resources have been purchased by research groups who have exclusive access to them.

General CPU node specifications

98 nodes are publicly available (not exclusive to research groups).

  • Machine: Dell PowerEdge C6320
  • CPUs: 2 x Intel Xeon E5-2630 v3
    • Haswell processor microarchitecture;
    • 2.40 GHz;
    • Support for AVX2 vectorisation instructions (simultaneously apply the same operation to multiple values in hardware);
    • Support for Fused Multiply-Add (FMA) instructions (expedites operations involving the accumulation of products, e.g. matrix multiplication).
    • Hyperthreading is disabled on all nodes except the four reserved for interactive jobs.
  • RAM: 64 GiB (i.e. 4 GiB per core)
    • 1866 MHz;
    • DDR4.
  • Local storage: 1 TiB SATA III HDD
    • /scratch: 836 GiB of temporary storage;
    • /tmp: 16 GiB of temporary storage.
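Whether a build or job can exploit these features can be confirmed directly on a node. A minimal sketch for any Linux host (`/scratch` exists only on worker nodes, so only `/tmp` is checked here):

```shell
# Check the CPU feature flags advertised by the kernel (Linux-only)
for feature in avx2 fma; do
    if grep -qw "$feature" /proc/cpuinfo; then
        echo "$feature: supported"
    else
        echo "$feature: not supported"
    fi
done

# Check free space on node-local temporary storage
# (on a worker node, also inspect /scratch)
df -h /tmp
```

Compilers only emit AVX2/FMA instructions when asked (e.g. `gcc -march=haswell` or `-mavx2 -mfma`), so checking the flags is worthwhile before enabling architecture-specific optimisations.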

Large memory node specifications

Four nodes are publicly available (not exclusive to research groups).

These are identical to the general CPU nodes but with 256 GiB RAM (16 GiB per core).

GPU node specifications

Two nodes are publicly available (not exclusive to research groups):

  • Machine: Dell PowerEdge C4130
  • CPUs: 2 x Intel Xeon E5-2630 v3 (2.40GHz)
  • RAM: 64 GiB (i.e. 4 GiB per core); 1866 MHz; DDR4
  • Local storage: 800 GiB SATA SSD
  • GPUs: 8 x NVIDIA Tesla K80
    • 24 GiB of GDDR5 memory per K80 (12 GiB per GPU; 192 GiB per node)
    • Up to 2.91 Teraflops of double precision performance with NVIDIA GPU Boost
    • Up to 8.74 Teraflops of single precision performance with NVIDIA GPU Boost
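GPUs on these nodes are allocated through the scheduler rather than used directly. A hedged sketch of a Son of Grid Engine job script for a single-GPU job (the `gpu` and `rmem` resource names are assumptions; consult the ShARC scheduler documentation for the exact syntax):

```shell
#!/bin/bash
# Illustrative SGE job script for a single-GPU job on a K80 node.
# Resource names below are assumptions, not verified ShARC settings.
#$ -l gpu=1          # request one GPU
#$ -l rmem=8G        # request 8 GiB of real memory
#$ -l h_rt=01:00:00  # request one hour of wall-clock time

nvidia-smi           # report the GPU(s) allocated to this job
```

The script would be submitted with `qsub`; `h_rt` (hard run-time limit) is a standard Grid Engine resource.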

Hardware-accelerated visualisation nodes

One node is publicly available:

  • Machine: Dell Precision Rack 7910
  • CPUs: 2 x Intel Xeon E5-2630 v3 (2.40GHz)
  • RAM: 128 GiB (i.e. 8 GiB / core); 1866 MHz; DDR4
  • Local storage: 1 TiB
  • Graphics cards: 2 x NVIDIA Quadro K4200
    • Memory: 4 GiB GDDR5 SDRAM

Networking

  • Intel Omni-Path Architecture (OPA) (100 Gb/s) to all public nodes
  • Gigabit Ethernet

Operating System and software

  • OS: CentOS 7.x (binary-compatible with Red Hat Enterprise Linux 7.x) on all nodes
  • Interactive and batch job scheduling software: Son of Grid Engine
  • Many applications, compilers, libraries and parallel processing tools. See Software on ShARC.
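In practice, a session is started through Son of Grid Engine and software is then made available via environment modules. A hedged sketch of a typical interactive workflow (the module name is illustrative, not a verified ShARC module):

```shell
qrsh                   # start an interactive session on a worker node
module avail           # list the software modules available
module load dev/gcc    # load a module (illustrative name)
```

`qrsh` is standard Grid Engine; sites often provide local wrappers around it, so the ShARC documentation should be consulted for the recommended command.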

Non-worker nodes

  • Two login nodes (for resilience)
  • Other nodes to provide:
    • Lustre parallel filesystem
    • Son of Grid Engine scheduler ‘head’ nodes
    • Directory services