Access Methods

Native Lustre Mount

Taiga is available via a native Lustre mount on the following systems:

  • Delta

  • DeltaAI

  • Illinois Campus Cluster (coming soon)

  • Radiant

  • NCSA Industry Systems

  • HAL

  • ISL Cluster

Native subdirectory mounts can also be requested for one-off machines via a support request; such mounts are only allowed for machines that have gone through the NCSA security hardening process. The storage enabling technology (SET) team is developing a streamlined guide for Lustre client installation and configuration (coming soon!).

Globus

Globus is a web-based file transfer system that works in the background to move files between systems with Globus endpoints. Go to Transferring Files - Globus for instructions on using Globus with NCSA computing resources.

The Taiga endpoint collection name is “NCSA Taiga”.

NFS

Note

Native mounts of Taiga using the Lustre client are preferred due to superior performance and increased stability.

Subdirectories of Taiga can be mounted via NFS, when necessary. The NFS service is accessed via the taiga-nfs.ncsa.illinois.edu high availability (HA) endpoint. The NFS endpoint consists of 4 servers that are directly connected to the 100GbE public-facing storage network and, via redundant links, to Taiga’s HDR InfiniBand core fabric.

If you need an NFS export, submit a support request and the storage team will assist in getting the export provisioned.
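Once an export is provisioned, it can be mounted like any other NFS share. The following /etc/fstab entry is only a sketch: the export path and mount point are hypothetical placeholders, and the storage team will supply the actual values and recommended options for your export.

```
# /etc/fstab — hypothetical Taiga NFS export; substitute the export path
# and options provided by the storage team
taiga-nfs.ncsa.illinois.edu:/taiga/ncsa/my_project  /mnt/taiga  nfs  vers=4,hard,noatime  0  0
```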

S3

Whole allocations, or specific sub-directories of your allocation, can be exposed via S3. Taiga leverages the Versity S3 Gateway to provide this functionality. Scale testing and hardening of the S3 bucket deployments are underway; the service is expected to be available for use around July 1, 2024.

To request an S3 endpoint for all (or part) of your allocation area, please submit a support request.
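Once provisioned, the endpoint should work with any standard S3 client. As a hypothetical sketch (the profile name and endpoint hostname below are placeholders, not the real service values), an AWS CLI configuration might look like:

```
# ~/.aws/config — hypothetical profile; substitute the endpoint URL and
# credentials the storage team provides with your S3 endpoint
[profile taiga]
region = us-east-1
endpoint_url = https://taiga-s3.ncsa.illinois.edu
```

With such a profile in place, a bucket listing would take a form like `aws s3 ls s3://my-bucket --profile taiga` (bucket name illustrative).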

Allocation Access Guidelines

Access to data on Taiga is ultimately governed by normal POSIX permissions and, more specifically, group membership. The following are some common scenarios:

Direct Investment

If you or your team has a direct investment on Taiga that is separate from a compute allocation, access to your area is governed by an NCSA LDAP or Campus AD group.

Your group’s principal investigator (PI), and any others they have designated, can add people to the NCSA LDAP group via NCSA’s Identity Portal. Once you are a member of the LDAP group, you can access the data from any system that mounts Taiga, across the center, with the same path.

Principal Investigators and Tech Reps of Illinois Campus Cluster allocations can add users to their allocation groups via the “Manage Users” section of the investor portal.

HPC System Project Directory Access

Many NCSA compute environments leverage Taiga for their project space. Accessing data that is within a compute environment project area requires membership in the compute allocation (Delta, HAL, and so on). The compute allocation PI can add you to the LDAP group that grants you access to this data.

Once you are in the compute allocation LDAP group, you should be able to access the data from:

  • All NCSA compute environments you have access to via the full path.

  • The compute environment the data originates from via /projects/.

For example, the Delta /projects mount maps to the full path /taiga/nsf/delta/ and a Delta compute allocation LDAP group may look like delta_XXXX.
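Because access comes down to POSIX group membership, a quick way to confirm you are in an allocation group is to check your group list on the system in question. The group name below uses the delta_XXXX placeholder pattern from above, not a real group:

```shell
# List the current user's groups and check for a given allocation group.
# "delta_XXXX" is a placeholder; substitute your actual allocation group.
if id -nG | tr ' ' '\n' | grep -qx 'delta_XXXX'; then
  echo "member of delta_XXXX"
else
  echo "not a member of delta_XXXX"
fi
```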

Radiant Allocation

The full path to Radiant allocations on Taiga is /taiga/ncsa/radiant/. By default, these allocations are set up with group ownership of “taiga_prj_rad_bXXX”, which matches your Radiant allocation ID. Permissions on this area are enforced to 770 at the top level to ensure data privacy. If you need access to this data outside of Radiant, submit a support request so we can add PIs as maintainers of the “taiga_prj_rad_bXXX” group.

Whenever possible, it is recommended that you leverage NCSA LDAP within your Radiant VMs to align data permissions with users’ proper UIDs and GIDs.
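The 770 enforcement described above can be illustrated with standard POSIX tools. The directory name here is illustrative, not an actual Taiga path, and the chgrp step is shown only as a comment because the allocation group exists only on Taiga:

```shell
# Create a demo directory and apply the same top-level mode Taiga enforces.
mkdir -p demo_alloc
# On Taiga, the group would be your allocation's group, e.g.:
# chgrp taiga_prj_rad_bXXX demo_alloc
chmod 770 demo_alloc    # owner and group get rwx; all other users get nothing
stat -c '%a' demo_alloc # prints: 770
```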

Virtual Machines for Data Services

Some groups may want/need to have virtual machines (VMs) that can directly access their data on Taiga for things like:

  • Data serving

  • Data portals

  • Light data analysis/indexing

For these use cases, groups should get an appropriately sized Radiant allocation to operate these services. Radiant is a separate service, but it offers a scalable way to give groups VM infrastructure that can access their data on Taiga.