Quickstart

Get started with the Delft AI Cluster (DAIC)

Follow these steps to start using DAIC. Complete the prerequisites first, then work through the getting started guides.


Need help? Join DAIC Mattermost for support and announcements.

1 - First Login

Log in to DAIC over SSH and verify your account is ready.

Before you begin

You need:

  • a TU Delft NetID with access to the environment
  • access to the TU Delft network, either on campus or through eduVPN
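
Before trying SSH, you can verify that the login node's SSH port is reachable from your machine. A minimal sketch (the `port_open` helper is defined here for illustration, not a DAIC tool; `False` usually means you are not on the TU Delft network or connected via eduVPN):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, refused connections, DNS failures
        return False

print(port_open("daic01.hpc.tudelft.nl", 22))
```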

Login via SSH

  1. Open your terminal and run the following SSH command:
$ ssh <YourNetID>@daic01.hpc.tudelft.nl
  2. Enter your password when prompted. On first login, DAIC may also set up parts of your home environment:
The HPC cluster is restricted to authorized users only.

<YourNetID>@daic01.hpc.tudelft.nl's password:
Authorized users only. All activity may be monitored and reported.

Welcome to this HPC cluster.
Since this is the first time you logon, we will help you setup your environment.

***************************************************************************
This script will create symbolic links to other personal storage locations.
***************************************************************************

Setting /home/<YourNetID>/linuxhome to /tudelft.net/staff-homes-linux/a/<YourNetID>

Verify login worked

Run the following commands:

$ hostname
daic01.hpc.tudelft.nl
$ pwd
/home/<YourNetID>
$ whoami
<YourNetID>

If these commands return the expected values, your login was successful.

Troubleshooting

  • If you get Permission denied errors:
    • check that you are connected to the TU Delft network (directly or via eduVPN);
    • check that you are using the correct NetID;
    • if both are correct and login still fails, your account may not be active yet.
  • If the problem persists, contact support.
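
If the cause is not obvious, verbose SSH output usually shows which step fails. Run this from your local machine and look at the name-resolution and authentication lines near the end:

```
$ ssh -v <YourNetID>@daic01.hpc.tudelft.nl
```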

Next steps

  1. Passwordless SSH - Connect without a password
  2. Shell Setup - Configure your environment
  3. First Job - Submit a batch job

2 - Passwordless SSH

Set up SSH keys to connect without a password.

Generate an SSH key (on your local machine)

$ ssh-keygen -t ed25519 -C "your.email@tudelft.nl"

Press Enter to accept defaults. Optionally set a passphrase.

Copy your public key to DAIC

$ ssh-copy-id <YourNetID>@daic01.hpc.tudelft.nl
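
If `ssh-copy-id` is not available on your machine (common on Windows), you can append the public key manually. A sketch, assuming the default key path from the previous step:

```
$ cat ~/.ssh/id_ed25519.pub | ssh <YourNetID>@daic01.hpc.tudelft.nl "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
```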

Test passwordless login

$ ssh <YourNetID>@daic01.hpc.tudelft.nl

You should connect without entering a password.

Kerberos for network storage

After connecting with SSH keys, run:

$ kinit
Password for <YourNetID>@TUDELFT.NET:

Verify your ticket:

$ klist
Ticket cache: KCM:656519
Default principal: <YourNetID>@TUDELFT.NET

Valid starting     Expires            Service principal
03/23/26 11:05:12  03/23/26 21:05:12  krbtgt/TUDELFT.NET@TUDELFT.NET

SSH config shortcut

Add to ~/.ssh/config on your local machine:

Host daic
    HostName daic01.hpc.tudelft.nl
    User <YourNetID>

Then connect with just:

$ ssh daic
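
If your key lives outside the default path, or idle connections drop, the same entry can carry a few more standard OpenSSH options. A sketch; adjust the key file name to your own:

```
Host daic
    HostName daic01.hpc.tudelft.nl
    User <YourNetID>
    IdentityFile ~/.ssh/id_ed25519
    ServerAliveInterval 60
```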

Next steps

  1. Shell Setup - Configure your environment
  2. First Job - Submit a batch job

3 - Shell Setup

Configure your shell environment on DAIC.

Redirect caches and tools to linuxhome to avoid filling your 5MB home quota:

$ wget https://gitlab.ewi.tudelft.nl/daic/docs-experimental/-/raw/main/scripts/.daicrc -O ~/.daicrc
$ echo 'source ~/.daicrc' >> ~/.bashrc
$ source ~/.bashrc

This configures storage paths for:

  • UV, Pixi, Conda, Mamba, Micromamba
  • Apptainer, pip, npm
  • Hugging Face, Jupyter
  • XDG directories
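
To confirm the redirection is active in your current shell, print a couple of the variables; they show `<unset>` if ~/.daicrc has not been sourced yet:

```shell
# Prints the redirected cache locations, or <unset> if ~/.daicrc
# has not been sourced in this shell.
printf 'PIP_CACHE_DIR: %s\n' "${PIP_CACHE_DIR:-<unset>}"
printf 'HF_HOME:       %s\n' "${HF_HOME:-<unset>}"
```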

Shell aliases (optional)

For useful SLURM aliases and shell styling:

$ wget https://gitlab.ewi.tudelft.nl/reit/shell-config/-/raw/main/.bashrc-reit -O ~/.bashrc-reit
$ echo 'source ~/.bashrc-reit' >> ~/.bashrc
$ source ~/.bashrc

Test the installation:

$ reit
Hi, from the Research Engineering and Infrastructure Team!!!

See shell-config for details.

What .daicrc configures

The .daicrc file sets these environment variables:

Variable             Path
------------------   -------------------------------
XDG_CACHE_HOME       ~/linuxhome/.cache
UV_CACHE_DIR         ~/linuxhome/.cache/uv
PIXI_HOME            ~/linuxhome/.pixi
CONDA_PKGS_DIRS      ~/linuxhome/.conda/pkgs
APPTAINER_CACHEDIR   ~/linuxhome/.cache/apptainer
PIP_CACHE_DIR        ~/linuxhome/.cache/pip
HF_HOME              ~/linuxhome/.cache/huggingface

Next steps

  1. First Job - Submit a batch job

4 - First DAIC Job

Submit your first job to DAIC with SLURM and check the results.

Before you begin

You should already be logged in to DAIC. If not, complete First Login.

Submit a simple job

1. Create a working directory

$ mkdir -p ~/my-first-job && cd ~/my-first-job

2. Create a Python script

hello.py

import socket
print(f"Hello from {socket.gethostname()}")

3. Create a SLURM batch script

submit.sh

#!/bin/bash
#SBATCH --account=<your-account>
#SBATCH --partition=all
#SBATCH --time=0:05:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1G
#SBATCH --output=slurm_%j.out

srun python hello.py

4. Submit the job

$ sbatch submit.sh
Submitted batch job 294

5. Monitor the job

$ squeue -u $USER
  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
    294       all submit.s  netid01  R       0:02      1 gpu23

6. Check the output

Once completed, view the output file:

$ cat slurm_294.out
Hello from gpu23.ethernet.tudhpc
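
Once the job has finished and left the queue, `sacct` reports its final state and basic accounting (294 is the example job ID from above):

```
$ sacct -j 294 --format=JobID,JobName,State,Elapsed,MaxRSS
```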

Request a GPU

To run GPU jobs, add the --gres flag to your batch script:

gpu_submit.sh

#!/bin/bash
#SBATCH --account=<your-account>
#SBATCH --partition=all
#SBATCH --time=0:30:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --gres=gpu:1
#SBATCH --output=slurm_%j.out

srun nvidia-smi
srun python my_gpu_script.py
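
`my_gpu_script.py` above is a placeholder; a minimal version might only confirm that PyTorch can see the allocated GPU. A sketch, assuming PyTorch is provided by the loaded modules or your environment:

```python
# Hypothetical my_gpu_script.py: report whether PyTorch sees a GPU.
try:
    import torch
    if torch.cuda.is_available():
        msg = f"GPU: {torch.cuda.get_device_name(0)}"
    else:
        msg = "No GPU visible to PyTorch"
except ImportError:
    msg = "PyTorch is not installed in this environment"
print(msg)
```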

Next steps

  1. Python Setup - Set up Python environments on DAIC

5 - Python Setup

Set up Python environments on DAIC.

Option 1: Modules (pre-installed packages)

Load pre-installed Python packages:

$ module load 2025/gpu
$ module load py-torch/2.5.1
$ module load py-numpy/1.26.4

$ python -c "import torch; print(torch.__version__)"
2.5.1

Option 2: UV

UV is a fast Python package manager.

Install UV

$ curl -LsSf https://astral.sh/uv/install.sh | sh

Create a project

$ cd /tudelft.net/staff-umbrella/<project>
$ uv init myproject
$ cd myproject
$ uv add torch numpy pandas

Run scripts

$ uv run python train.py

Install CLI tools

Install Python tools globally (ruff, black, jupyter, etc.):

$ uv tool install ruff
$ uv tool install jupyter

$ ruff --version
$ jupyter lab

List installed tools:

$ uv tool list

Use in batch scripts

#!/bin/bash
#SBATCH --account=<your-account>
#SBATCH --partition=all
#SBATCH --gres=gpu:1

module purge
module load 2025/gpu cuda/12.9

cd /tudelft.net/staff-umbrella/<project>/myproject
srun uv run python train.py

Option 3: Pixi (conda alternative)

Pixi is a fast conda-compatible package manager.

Install Pixi

$ curl -fsSL https://pixi.sh/install.sh | sh

Create a project

$ cd /tudelft.net/staff-umbrella/<project>
$ pixi init myproject
$ cd myproject
$ pixi add python pytorch numpy

Run commands

$ pixi run python train.py
$ pixi shell  # activate environment

Use in batch scripts

#!/bin/bash
#SBATCH --account=<your-account>
#SBATCH --partition=all
#SBATCH --gres=gpu:1

module purge
module load 2025/gpu cuda/12.9

cd /tudelft.net/staff-umbrella/<project>/myproject
srun pixi run python train.py

Option 4: Virtual environment

$ module load 2025/gpu
$ python -m venv /tudelft.net/staff-umbrella/<project>/venvs/myenv --system-site-packages

$ source /tudelft.net/staff-umbrella/<project>/venvs/myenv/bin/activate
$ pip install transformers

Option 5: Micromamba (global conda environments)

Micromamba is a lightweight conda implementation. Use it when you need traditional conda environments shared across projects.

Install Micromamba

$ "${SHELL}" <(curl -L micro.mamba.pm/install.sh)

Follow the prompts. When asked for install location, use your project storage:

/tudelft.net/staff-umbrella/<project>/micromamba

Create and use environments

$ micromamba create -n myenv python=3.11 pytorch numpy -c conda-forge -c pytorch
$ micromamba activate myenv

$ python -c "import torch; print(torch.__version__)"

Use in batch scripts

#!/bin/bash
#SBATCH --account=<your-account>
#SBATCH --partition=all
#SBATCH --gres=gpu:1

module purge
module load 2025/gpu cuda/12.9

eval "$(micromamba shell hook --shell bash)"
micromamba activate myenv

srun python train.py
