# Dask: How to execute Python workloads using a Dask cluster on Vulcan
**Motivation:** This document shows how to launch a Dask cluster on our compute platforms and run a simple workload on it.
Structure:

- `deployment_scripts`
- `notebooks`
- `src`

To do:

- Made scripts for environment creation and deployment in the folder `local_scripts`
- Changed scripts to `deployment_scripts`
- Added a step about sending the Python file
This repository shows how to deploy a Dask cluster on Vulcan and execute your programs using that cluster.
## Table of Contents

- [Prerequisites](#prerequisites)
- [Getting Started](#getting-started)
- [Usage](#usage)
- [Notes](#notes)
## Prerequisites
Before running the application, make sure you have the following prerequisites set up in a local conda environment:

- **Conda Installation:** Ensure that Conda is installed on your local system. For more information, see the documentation for Conda on HLRS HPC systems.
- **Dask:** Install Dask into the environment using conda.
- **Conda-Pack:** conda-pack is used to package the conda environment into a single tarball, which is then transferred to Vulcan (see the sketch after this list).
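As a minimal sketch of the packaging step, conda-pack can be driven from Python as well as from the command line. The environment name `my_env` and the output filename below are placeholders, not names from this repository:

```python
import conda_pack

# Package the conda environment "my_env" into a single tarball that can be
# copied to the cluster. Equivalent CLI: conda pack -n my_env -o my_env.tar.gz
conda_pack.pack(name="my_env", output="my_env.tar.gz")
```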
## Getting Started
- **Build and transfer the Conda environment to Vulcan:**

  Only the `main` and `r` channels are available through the Conda module on the clusters. To use custom packages, we need to move the local Conda environment to Vulcan.

  Follow the instructions in the Conda environment builder repository.
- **Allocate a workspace on Vulcan:**

  Proceed to the next step if you have already configured your workspace. Use the following commands to create a workspace on the high-performance filesystem, which will expire after 10 days, and to find its path. For more information, such as how to enable reminder emails, refer to the workspace mechanism guide.

  ```bash
  ws_allocate dask_workspace 10
  ws_find dask_workspace  # find the path to the workspace, which is the destination directory in the next step
  ```
- **Clone the repository on Vulcan to use the deployment scripts and project structure:**

  ```bash
  cd <workspace_directory>
  git clone <repository_url>
  ```
- **Send all the code to the appropriate directory on Vulcan using `scp`:**

  ```bash
  scp <your_script>.py <destination_host>:<destination_directory>
  ```
- **SSH into Vulcan and start a job interactively:**

  ```bash
  qsub -I -N DaskJob -l select=1:node_type=clx-21 -l walltime=02:00:00
  ```

  Note: For multiple nodes, it is recommended to write a `.pbs` script and submit it using `qsub`.
- **Go into the directory with all the code:**

  ```bash
  cd <destination_directory>
  ```
- **Initialize the Dask cluster:**

  ```bash
  source deploy-dask.sh "$(pwd)"
  ```

  Note: At the moment, the deployment is verbose, and there is no way to silence its logs.

  Note: Make sure all scripts are executable, e.g. by running `chmod +x` on them.
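Once the deployment finishes and the workers are up, you can sanity-check the cluster from Python. This is a minimal sketch, assuming the scheduler listens on the default Dask port 8786 on the current node; use the address that the deployment logs actually report:

```python
from dask.distributed import Client

# Assumed address: replace with the scheduler address reported by deploy-dask.sh.
client = Client("tcp://localhost:8786")

# Print basic cluster state, including the number of connected workers.
print(client)
print(len(client.scheduler_info()["workers"]), "workers connected")
```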
## Usage
To run the application interactively, execute the following command after all the cluster's nodes are up and running:

```bash
python
```

Or, to run a full script:

```bash
python <your-script>.py
```
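As an illustration, a small script run this way might distribute work over the cluster and gather the results. This is a sketch, not code from this repository; the scheduler address is an assumption and should match your deployment:

```python
from dask.distributed import Client

def square(x):
    return x ** 2

# Assumed address: replace with the scheduler address of your deployment.
client = Client("tcp://localhost:8786")

# Fan the function out over the workers, then collect the results.
futures = client.map(square, range(100))
results = client.gather(futures)
print("sum of squares:", sum(results))

client.close()
```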
Note: If you don't see your environment in the Python interpreter, activate it manually before starting the interpreter:

```bash
conda activate <your-env>
```
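To confirm which interpreter (and hence which environment) is actually running, you can check from inside Python:

```python
import sys

# The path should point inside your conda environment,
# e.g. .../envs/<your-env>/bin/python
print(sys.executable)
```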
## Notes
Note: The Dask cluster is set to verbose. To silence the logs, pass `silence_logs` when connecting to the Dask cluster:

```python
from dask.distributed import Client

client = Client(..., silence_logs='error')
```
Note: Replace all filenames within `<>` with the actual values applicable to your project.