System Architecture

Tape Library

Granite is made up of a single Spectra TFinity library. This 19-frame library can hold over 300PB of replicated data, using 20 LTO-9 tape drives to move data to and from thousands of LTO-9 (18TB) tapes.
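
As a quick sanity check on those numbers, here is a back-of-the-envelope sketch assuming the nominal 18TB native LTO-9 capacity (compression and the replica layout are ignored):

    # Rough tape count implied by the stated library capacity.
    # Assumes nominal (uncompressed) LTO-9 capacity; the real count
    # depends on replication layout, compression, and scratch media.
    LIBRARY_CAPACITY_PB = 300   # stated replicated capacity
    LTO9_NATIVE_TB = 18         # native capacity per LTO-9 cartridge

    tapes = LIBRARY_CAPACITY_PB * 1000 / LTO9_NATIVE_TB
    print(f"~{tapes:,.0f} LTO-9 tapes")   # ~16,667, i.e. "thousands"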

The ScoutAM software can section media into chunks, which mitigates the cost of writing small files to tape. While this helps the library handle smaller files, large files are strongly recommended, and quotas enforce a large average file size (100MB).
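
One practical consequence is that many small files should be bundled before they are archived. A minimal sketch using Python's standard tarfile module (the directory and output names here are hypothetical examples, not Granite conventions):

    import tarfile
    from pathlib import Path

    # Bundle a directory tree of small files into a single tarball so
    # the average file size written to tape stays well above 100MB.
    src = Path("small_files")              # hypothetical source directory
    with tarfile.open("bundle.tar", "w") as tar:
        for f in sorted(src.rglob("*")):
            if f.is_file():
                tar.add(f, arcname=f.relative_to(src))
    # Transfer bundle.tar to the archive instead of the individual files.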

Granite Servers

Five Dell PowerEdge servers form Granite’s server infrastructure.

Each node connects via direct Fibre Channel (FC) links to four tape interfaces on the tape library and to the disk cache via 100Gb InfiniBand. Each node is also connected at 2 x 100GbE to the storage Ethernet aggregation, which in turn connects at 2 x 100GbE to the NPCF core network.
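
For a sense of scale, the theoretical Ethernet line rates alone work out as below; actual throughput is lower, bounded by tape drive speed, the FC paths, and protocol overhead:

    # Theoretical Ethernet line rate for the Granite server nodes.
    # These are upper bounds only; real rates are limited by the tape
    # drives, FC connections, and protocol overhead.
    GBPS_PER_LINK = 100        # Gb/s per 100GbE link
    LINKS_PER_NODE = 2
    NODES = 5

    per_node = GBPS_PER_LINK * LINKS_PER_NODE   # 200 Gb/s per node
    aggregate = per_node * NODES                # 1000 Gb/s across nodes
    print(f"{per_node} Gb/s per node, {aggregate} Gb/s aggregate")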

Disk Cache

The archive disk cache is where all data lands when being ingested into or extracted from the archive. It currently consists of a DDN SFA 14KX unit with a mix of SAS SSDs (metadata) and SAS HDDs (capacity), for an aggregate capacity of ~2PB.

Cluster Export Nodes

The tape archive is mounted via NFS on the Globus export nodes, giving them direct access to the archive. Granite shares its export nodes with Taiga, which allows quicker and more direct Globus transfers between Taiga and Granite.
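
For illustration, a Taiga-to-Granite transfer through those export nodes could be scripted with the Globus Python SDK. This is a sketch only: the collection UUIDs and paths are placeholders, and the authorizer must come from a separate Globus Auth login flow (omitted here):

    import globus_sdk

    # Placeholder collection UUIDs; substitute the real Taiga and
    # Granite Globus collection IDs.
    TAIGA = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    GRANITE = "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"

    def archive_bundle(authorizer: globus_sdk.authorizers.GlobusAuthorizer) -> str:
        """Submit a Taiga -> Granite transfer and return its task ID."""
        tc = globus_sdk.TransferClient(authorizer=authorizer)
        tdata = globus_sdk.TransferData(
            tc,
            source_endpoint=TAIGA,
            destination_endpoint=GRANITE,
            label="taiga-to-granite archive",
            sync_level="checksum",  # skip files that already match
        )
        # Hypothetical paths: one large bundle, per the quota guidance.
        tdata.add_item("/projects/example/bundle.tar",
                       "/archive/example/bundle.tar")
        return tc.submit_transfer(tdata)["task_id"]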

System Architecture Diagram

Granite architecture diagram showing the tape library connected to the Granite servers. The Granite servers are also connected to the disk cache and the storage Ethernet aggregate. The storage Ethernet aggregate is also connected to the cluster export nodes, the NPCF core, and the NCSA center-wide filesystem.