Programming Environment (Building Software)

C, C++, and Fortran: Traditional HPC

The DeltaAI programming environment is provided by HPE/Cray as the Cray User Environment. The Cray Compiler Environment describes the compiler wrappers cc, CC, and ftn for building C, C++, and Fortran codes (serial, OpenMP, and/or MPI). Use these wrappers with the PrgEnv-<compiler_family> module of your choice. Two programming environment modules are tested and functional on DeltaAI: PrgEnv-gnu and PrgEnv-cray; PrgEnv-gnu is loaded by default. PrgEnv-nvidia and PrgEnv-nvhpc do not work at this time.

Typical flags (PrgEnv-gnu or PrgEnv-cray; compile with cc, CC, or ftn):

  OpenMP           -fopenmp
  MPI or serial    no special flags
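
As a minimal sketch (the source file names are hypothetical), building with the wrappers under the default PrgEnv-gnu looks like the following; swap to PrgEnv-cray first with module swap PrgEnv-gnu PrgEnv-cray if you prefer the Cray compilers.

cc  -fopenmp -o omp_hello omp_hello.c     # C with OpenMP
CC  -o mpi_hello mpi_hello.cpp            # C++ with MPI, no special flags needed
ftn -fopenmp -o hybrid hybrid.f90         # Fortran with MPI + OpenMP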

Compiler Recommendations

The NVIDIA Grace Hopper Tuning Guide has a section on Compilers, with recommendations on flags and compiler vendors.
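
For example (the flags and source file name here are illustrative only; defer to the tuning guide for current recommendations), GCC builds on the Grace CPU generally benefit from optimizing for the local Neoverse V2 cores:

cc -O3 -mcpu=native -o app app.c    # tune for the Grace (Neoverse V2) CPU of the node you build on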

CUDA

The nvcc compiler for CUDA is available via the cudatoolkit module (in the default module set).

arnoldg@gh002:~> which nvcc
/opt/nvidia/hpc_sdk/Linux_aarch64/23.9/cuda/12.2/bin/nvcc
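
A minimal CUDA compile might look like the following (the source file name is hypothetical; sm_90 is the compute capability of the Hopper GPU in the GH200):

nvcc -arch=sm_90 -o saxpy saxpy.cu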

Python

Note

More information about Python on DeltaAI is in Software - Python.

You always have your choice of python version, since it is easy to install one yourself via a variety of methods (conda-forge, miniforge, and so on). There are basic python modules on DeltaAI provided by HPE/Cray (cray-python); these include mpi4py and numpy. There are also conda-forge builds with more software in them, such as python/miniforge3_pytorch (which also includes mpi4py). For the latest information on DeltaAI python modules, search with module spider for python and anaconda:

arnoldg@gh-login03:~> module spider python
----------------------------------------------------------------------------
  cray-python:
----------------------------------------------------------------------------
     Versions:
        cray-python/3.11.5
        cray-python/3.11.7

arnoldg@gh-login03:~> module spider miniforge

---------------------------------------------------------------------------------------------------------------------------------------
  python/miniforge3_pytorch: python/miniforge3_pytorch/2.5.0
---------------------------------------------------------------------------------------------------------------------------------------
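
After choosing a module from the spider output above, load it and confirm which interpreter is first in your PATH (a sketch; append the specific version if you need to pin one):

module load python/miniforge3_pytorch
which python
python --version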

Note: to build your own mpi4py, follow this recipe (MPICC="cc -shared" is required):

MPICC="cc -shared" pip install mpi4py

Legacy GPUDirect for MPI+CUDA

PrgEnv-gnu

To build legacy CUDA codes with GPUDirect support, unload the gcc-native/13 compiler and replace it with gcc-native/12, load cudatoolkit and craype-accel-nvidia90, and set MPICH_GPU_SUPPORT_ENABLED=1. After building the code (with cc, CC, or ftn), verify that the libmpi_gtl_cuda library is part of the application; this library is required for GPUDirect support. The same module and environment settings should be in place at both compile/link time and at runtime. See the example below.

module load craype-accel-nvidia90
module unload gcc-native
module load gcc-native/12
export MPICH_GPU_SUPPORT_ENABLED=1
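
With those settings in place, a build might look like the sketch below (file names are hypothetical): compile the CUDA parts with nvcc, then link with the Cray wrapper so MPI and the GTL library are pulled in.

nvcc -c -arch=sm_90 kernels.cu -o kernels.o   # device code
cc -o my_app main.c kernels.o -lcudart        # wrapper adds MPI; -lcudart may be unnecessary depending on what the cudatoolkit module sets up
ldd my_app | grep gtl                         # expect libmpi_gtl_cuda.so, as in the session below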

arnoldg@gh002:~/osu-micro-benchmarks-7.4/c/mpi/collective/blocking> ldd osu_reduce | grep gtl
      libmpi_gtl_cuda.so.0 => /opt/cray/pe/lib64/libmpi_gtl_cuda.so.0 (0x0000ffffa6a60000)
arnoldg@gh002:~/osu-micro-benchmarks-7.4/c/mpi/collective/blocking> srun osu_reduce -d cuda --validation -m131072:131072

# OSU MPI-CUDA Reduce Latency Test v7.4
# Datatype: MPI_CHAR.
# Size       Avg Latency(us)        Validation
131072                254.03              Pass

Visual Studio Code

Warning

These VS Code pages are under construction.

The following pages provide step-by-step instructions on how to use VS Code in different configurations on DeltaAI.