Template repository for Ray workflows on HLRS HPC Systems

Ray: How to launch a Ray Cluster on Hawk?

This guide shows you how to launch a Ray cluster on HLRS' Hawk system.

Table of Contents

  • Getting Started
  • Launch a local Ray Cluster in Interactive Mode
  • Launch a Ray Cluster in Batch Mode

Getting Started

Step 1. Build and transfer the Conda environment to Hawk:

Only the main and r channels are available through the Conda module on the clusters. To use custom packages, we need to build the Conda environment locally and transfer it to Hawk.

Follow the instructions in the Conda environment builder repository, which includes a YAML file for building a test environment to run Ray workflows.
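
If you want to see the mechanics behind that workflow, here is a minimal sketch of packing and transferring an environment with the conda-pack tool; the environment name and paths are placeholders, and the builder repository remains the recommended route:

# On your local machine: pack the environment (requires conda-pack)
conda pack -n ray_environment -o ray_environment.tar.gz

# Copy the archive to your Hawk workspace (placeholder paths)
scp ray_environment.tar.gz hawk:<workspace_directory>/

# On Hawk: unpack the environment into the workspace
mkdir -p <workspace_directory>/ray_environment
tar -xzf ray_environment.tar.gz -C <workspace_directory>/ray_environment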

Step 2. Allocate workspace on Hawk:

Skip to the next step if you have already configured your workspace. Otherwise, use the following command to create a workspace on the high-performance filesystem; it will expire in 10 days. For more information, such as how to enable reminder emails, refer to the workspace mechanism guide.

ws_allocate hpda_project 10
ws_find hpda_project # find the path to workspace, which is the destination directory in the next step
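
For example, the common ws_allocate interface accepts reminder options; treat the flags below as an assumption and confirm them against the workspace mechanism guide:

ws_allocate -r 3 -m you@example.com hpda_project 10 # send a reminder email 3 days before expiry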

Step 3. Clone the repository on Hawk to use the deployment scripts and project structure:

cd <workspace_directory>
git clone <repository_url>

Launch a local Ray Cluster in Interactive Mode

Working on a single node interactively allows faster code debugging.

Step 1. On the Hawk login node, start an interactive job using:

qsub -I -l select=1:node_type=rome -l walltime=01:00:00

Step 2. Activate the Conda environment:

# Load the Conda module
module load bigdata/conda
source activate # activates the base environment

# List available Conda environments for verification purposes
conda env list

# Activate a specific Conda environment.
conda activate ray_environment # you need to execute `source activate` first, or use `source [ENV_PATH]/bin/activate`

Step 3. Initialize the Ray cluster.

You can use a Python interpreter to start a local Ray cluster:

import ray

ray.init() # starts a local Ray cluster; the dashboard listens on 127.0.0.1:8265 by default
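
To check that the cluster works, you can run a small remote task; this snippet is purely illustrative and not part of the template:

import ray

ray.init()

@ray.remote
def square(x):
    return x * x

# Submit a few tasks and collect the results
print(ray.get([square.remote(i) for i in range(4)])) # [0, 1, 4, 9]

# Show the resources Ray detected on this node
print(ray.cluster_resources())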

Step 4. Connect to the dashboard.

Warning: Keep the default dashboard host 127.0.0.1 so that the Ray cluster is reachable only by you.

Note: We recommend using a dedicated Firefox profile for accessing web-based services on HLRS Compute Platforms. If you haven't created a profile, check out our guide.

You need the job ID and hostname of your current job. You can obtain this information on the login node using:

qstat -anw # get the job id and the hostname

Then, on your local computer, run:

export PBS_JOBID=<job-id> # e.g., 2316419.hawk-pbs5
ssh <compute-host> # e.g., r38c3t8n3

If this doesn't work, check the SSH configuration you set up for Hawk (see the sketch below).
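
For reference, an ~/.ssh/config along the following lines makes the ssh <compute-host> command above work and forwards the dashboard port. The hostnames and jump-host setup here are assumptions, and this sketch does not use the PBS_JOBID variable exported above, which the configuration in the HLRS guide may rely on; follow that guide for the authoritative settings:

# ~/.ssh/config (illustrative; adjust hostnames and user)
Host hawk
    HostName hawk.hww.hlrs.de
    User <your_username>

# Compute nodes (e.g., r38c3t8n3) are reached through the login node
Host r*
    ProxyJump hawk
    User <your_username>
    LocalForward 8265 localhost:8265 # forward the Ray dashboard port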

Then launch the Firefox web browser using the configured profile and open localhost:8265 to access the Ray dashboard.

Launch a Ray Cluster in Batch Mode

Let us estimate the value of π as an example application.

Step 1. Add execution permissions to start-ray-worker.sh

cd deployment_scripts
chmod +x start-ray-worker.sh

Step 2. Submit a job to launch the head and worker nodes.

You must modify the following lines in submit-ray-job.pbs (see the sketch after this list):

  • Line 3 sets the cluster size; the default configuration launches a 3-node cluster.
  • export WS_DIR=<workspace_dir> - set the correct workspace directory.
  • export PROJECT_DIR=$WS_DIR/<project_name> - set the correct project directory.
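
For orientation, the top of submit-ray-job.pbs might look like the following; this is an illustrative sketch under those assumptions, not the script's literal contents:

#!/bin/bash
#PBS -N ray-job
#PBS -l select=3:node_type=rome # line 3: cluster size (here, 3 nodes)
#PBS -l walltime=01:00:00

export WS_DIR=<workspace_dir> # your workspace path
export PROJECT_DIR=$WS_DIR/<project_name> # your project path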

Note: The job script src/monte-carlo-pi.py waits for all nodes in the Ray cluster to become available. Preserve this pattern in your own Python code when using a multi-node Ray cluster; a sketch follows.
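
A minimal sketch of that wait-for-nodes pattern; the expected node count is hard-coded here for illustration and is not necessarily how monte-carlo-pi.py obtains it:

import time

import ray

ray.init(address="auto") # connect to the cluster started by the job script

expected_nodes = 3 # assumption: match the size requested in submit-ray-job.pbs
# Block until all nodes have joined the cluster
while len([n for n in ray.nodes() if n["Alive"]]) < expected_nodes:
    time.sleep(5)

print(f"Cluster is up with {expected_nodes} nodes")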

Launch the job and monitor its progress. As the job starts, its status (S) changes from Q (Queued) to R (Running). Upon completion, the job no longer appears in the qstat -a output.

qsub submit-ray-job.pbs
qstat -anw # Q: Queued, R: Running, E: Ending
ls -l # list files after the job finishes
cat ray-job.o... # inspect the output file
cat ray-job.e... # inspect the error file

If you need to delete the job, use qdel <job-id>. If this doesn't work, use the -W force option: qdel -W force <job-id>