Quickstart
Get started with the Delft AI Cluster (DAIC)
Follow these steps to start using DAIC. Complete the prerequisites first, then work through the getting started guides.
Need help? Join DAIC Mattermost for support and announcements.
1 - First Login
Log in to DAIC over SSH and verify your account is ready.
Before you begin
You need:
- a TU Delft NetID with access to the environment
- access to the TU Delft network, either on campus or through eduVPN
Outside TU Delft network?
Connect to the TU Delft network via eduVPN.
Login via SSH
- Open your terminal and run the following SSH command:
$ ssh <YourNetID>@daic01.hpc.tudelft.nl
- You will be prompted for your password. On first login, DAIC may also set up parts of your home environment:
The HPC cluster is restricted to authorized users only.
<YourNetID>@daic01.hpc.tudelft.nl's password:
Authorized users only. All activity may be monitored and reported.
Welcome to this HPC cluster.
Since this is the first time you logon, we will help you setup your environment.
***************************************************************************
This script will create symbolic links to other personal storage locations.
***************************************************************************
Setting /home/<YourNetID>/linuxhome to /tudelft.net/staff-homes-linux/a/<YourNetID>
First login may take a moment
It might take a few seconds to set up your $HOME. This is normal.
Verify login worked
Run the following commands:
$ hostname
daic01.hpc.tudelft.nl
$ pwd
/home/<YourNetID>
$ whoami
<YourNetID>
If these commands return the expected values, your login was successful.
Home directory quota: 5 MB
Your home directory (/home/<YourNetID>) has a 5 MB quota - only for config files.
Use ~/linuxhome or project storage for data and code. See Storage.
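To see how much of the home quota you are using, and to stage a working directory on the larger storage, something like the following works (the projects directory name is just an example):

```shell
# how much of the 5 MB /home quota is in use
du -sh "$HOME"

# keep code and data under linuxhome instead (directory name is an example)
mkdir -p ~/linuxhome/projects
```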
Troubleshooting
- If you get "Permission denied" errors:
  - check that you are on the TU Delft network (directly or via eduVPN),
  - check that you are using the correct NetID.
- If both are correct and login still fails, your account may not be active yet. Contact support.
Next steps
- Passwordless SSH - Connect without a password
- Shell Setup - Configure your environment
- First Job - Submit a batch job
2 - Passwordless SSH
Set up SSH keys to connect without a password.
Generate an SSH key (on your local machine)
$ ssh-keygen -t ed25519 -C "your.email@tudelft.nl"
Press Enter to accept defaults. Optionally set a passphrase.
Copy your public key to DAIC
$ ssh-copy-id <YourNetID>@daic01.hpc.tudelft.nl
Test passwordless login
$ ssh <YourNetID>@daic01.hpc.tudelft.nl
You should connect without entering a password.
Kerberos for network storage
Important
SSH key login does not create a Kerberos ticket. Without a ticket, you cannot access network storage (~/linuxhome, project storage, etc.).
After connecting with SSH keys, run:
$ kinit
Password for <YourNetID>@TUDELFT.NET:
Verify your ticket:
$ klist
Ticket cache: KCM:656519
Default principal: <YourNetID>@TUDELFT.NET
Valid starting Expires Service principal
03/23/26 11:05:12 03/23/26 21:05:12 krbtgt/TUDELFT.NET@TUDELFT.NET
First access delay
Network storage may take up to 30 seconds to mount on first access. If you see “Stale file handle”, wait and retry.
SSH config shortcut
Add to ~/.ssh/config on your local machine:
Host daic
HostName daic01.hpc.tudelft.nl
User <YourNetID>
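Optionally, the same entry can keep idle connections from timing out; ServerAliveInterval is a standard ssh_config option:

```
Host daic
    HostName daic01.hpc.tudelft.nl
    User <YourNetID>
    ServerAliveInterval 60
```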
Then connect with just:
$ ssh daic
Next steps
3 - Shell Setup
Configure your shell environment on DAIC.
DAIC configuration (recommended)
Redirect caches and tools to linuxhome to avoid filling your 5 MB home quota:
$ wget https://gitlab.ewi.tudelft.nl/daic/docs-experimental/-/raw/main/scripts/.daicrc -O ~/.daicrc
$ echo 'source ~/.daicrc' >> ~/.bashrc
$ source ~/.bashrc
This configures storage paths for:
- UV, Pixi, Conda, Mamba, Micromamba
- Apptainer, pip, npm
- Hugging Face, Jupyter
- XDG directories
Shell aliases (optional)
For useful SLURM aliases and shell styling:
$ wget https://gitlab.ewi.tudelft.nl/reit/shell-config/-/raw/main/.bashrc-reit -O ~/.bashrc-reit
$ echo 'source ~/.bashrc-reit' >> ~/.bashrc
$ source ~/.bashrc
Test the installation:
$ reit
Hi, from the Research Engineering and Infrastructure Team!!!
See shell-config for details.
The .daicrc file sets these environment variables:
| Variable | Path |
|---|---|
| XDG_CACHE_HOME | ~/linuxhome/.cache |
| UV_CACHE_DIR | ~/linuxhome/.cache/uv |
| PIXI_HOME | ~/linuxhome/.pixi |
| CONDA_PKGS_DIRS | ~/linuxhome/.conda/pkgs |
| APPTAINER_CACHEDIR | ~/linuxhome/.cache/apptainer |
| PIP_CACHE_DIR | ~/linuxhome/.cache/pip |
| HF_HOME | ~/linuxhome/.cache/huggingface |
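After sourcing ~/.daicrc, a quick way to confirm the redirection took effect is to print a few of these variables; they should all resolve under ~/linuxhome:

```shell
# these should point below ~/linuxhome, not into your 5 MB /home
echo "XDG_CACHE_HOME=$XDG_CACHE_HOME"
echo "PIP_CACHE_DIR=$PIP_CACHE_DIR"
echo "HF_HOME=$HF_HOME"
```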
Next steps
4 - First DAIC Job
Submit your first job to DAIC with SLURM and check the results.
Before you begin
You should already be logged in to DAIC. If not, complete First Login.
Submit a simple job
1. Create a working directory
$ mkdir -p ~/my-first-job && cd ~/my-first-job
2. Create a Python script (save it as hello.py)
import socket
print(f"Hello from {socket.gethostname()}")
3. Create a SLURM batch script (save it as submit.sh)
#!/bin/bash
#SBATCH --account=<your-account>
#SBATCH --partition=all
#SBATCH --time=0:05:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1G
#SBATCH --output=slurm_%j.out
srun python hello.py
Account required
Replace <your-account> with your account name. Find yours with:
$ sacctmgr show associations user=$USER format=Account -P
4. Submit the job
$ sbatch submit.sh
Submitted batch job 294
5. Monitor the job
$ squeue -u $USER
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
294 all submit.s netid01 R 0:02 1 gpu23
6. Check the output
Once completed, view the output file:
$ cat slurm_294.out
Hello from gpu23.ethernet.tudhpc
Request a GPU
To run GPU jobs, add the --gres flag to your batch script:
#!/bin/bash
#SBATCH --account=<your-account>
#SBATCH --partition=all
#SBATCH --time=0:30:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --gres=gpu:1
#SBATCH --output=slurm_%j.out
srun nvidia-smi
srun python my_gpu_script.py
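If you want the job to report early whether it actually sees a GPU, a small standalone check can run before your main script; it relies only on nvidia-smi being present on GPU nodes:

```shell
#!/bin/sh
# report whether this node exposes a GPU to the job
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi -L | grep -q GPU; then
    echo "GPU visible"
else
    echo "no GPU visible"
fi
```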
Next steps
5 - Python Setup
Set up Python environments on DAIC.
Option 1: Use modules (recommended for common packages)
Load pre-installed Python packages:
$ module load 2025/gpu
$ module load py-torch/2.5.1
$ module load py-numpy/1.26.4
$ python -c "import torch; print(torch.__version__)"
2.5.1
Option 2: UV (recommended for custom packages)
UV is a fast Python package manager.
Run shell setup first
If you haven’t already, run the
shell setup to configure storage paths.
Install UV
$ curl -LsSf https://astral.sh/uv/install.sh | sh
Create a project
$ cd /tudelft.net/staff-umbrella/<project>
$ uv init myproject
$ cd myproject
$ uv add torch numpy pandas
Run scripts
$ uv run python train.py
Install tools
Install Python tools globally (ruff, black, jupyter, etc.):
$ uv tool install ruff
$ uv tool install jupyter
$ ruff --version
$ jupyter lab
List installed tools:
$ uv tool list
Use in batch scripts
#!/bin/bash
#SBATCH --account=<your-account>
#SBATCH --partition=all
#SBATCH --gres=gpu:1
module purge
module load 2025/gpu cuda/12.9
cd /tudelft.net/staff-umbrella/<project>/myproject
srun uv run python train.py
Storage location
Create projects in project storage, not your home directory (5 MB quota).
Option 3: Pixi (conda alternative)
Pixi is a fast conda-compatible package manager.
Install Pixi
$ curl -fsSL https://pixi.sh/install.sh | sh
Create a project
$ cd /tudelft.net/staff-umbrella/<project>
$ pixi init myproject
$ cd myproject
$ pixi add python pytorch numpy
Run commands
$ pixi run python train.py
$ pixi shell # activate environment
Use in batch scripts
#!/bin/bash
#SBATCH --account=<your-account>
#SBATCH --partition=all
#SBATCH --gres=gpu:1
module purge
module load 2025/gpu cuda/12.9
cd /tudelft.net/staff-umbrella/<project>/myproject
srun pixi run python train.py
Option 4: Virtual environment
$ module load 2025/gpu
$ python -m venv /tudelft.net/staff-umbrella/<project>/venvs/myenv --system-site-packages
$ source /tudelft.net/staff-umbrella/<project>/venvs/myenv/bin/activate
$ pip install transformers
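The other options each show a batch-script variant; the venv equivalent, assuming the environment created above, would be along these lines:

```shell
#!/bin/bash
#SBATCH --account=<your-account>
#SBATCH --partition=all
#SBATCH --gres=gpu:1
module purge
module load 2025/gpu
source /tudelft.net/staff-umbrella/<project>/venvs/myenv/bin/activate
srun python train.py
```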
Option 5: Micromamba (global conda environments)
Micromamba is a lightweight conda implementation. Use it when you need traditional conda environments shared across projects.
Install Micromamba
$ "${SHELL}" <(curl -L micro.mamba.pm/install.sh)
Follow the prompts. When asked for install location, use your project storage:
/tudelft.net/staff-umbrella/<project>/micromamba
Create and use environments
$ micromamba create -n myenv python=3.11 pytorch numpy -c conda-forge -c pytorch
$ micromamba activate myenv
$ python -c "import torch; print(torch.__version__)"
Use in batch scripts
#!/bin/bash
#SBATCH --account=<your-account>
#SBATCH --partition=all
#SBATCH --gres=gpu:1
module purge
module load 2025/gpu cuda/12.9
eval "$(micromamba shell hook --shell bash)"
micromamba activate myenv
srun python train.py
When to use which tool
- UV: Best for most projects. Fast, lockfiles, reproducible.
- Pixi: When you need conda-forge packages in a project.
- Micromamba: When you need shared environments across projects.
Next steps