System specifications

Overview of DAIC system specifications and comparison with other TU Delft clusters.

This section provides an overview of the Delft AI Cluster (DAIC) infrastructure and compares it with other compute facilities at TU Delft.

DAIC partitions and access/usage best practices


1 - Login Nodes

Overview of DAIC login nodes and appropriate usage guidelines.

Login nodes act as the gateway to the DAIC cluster. They are intended for lightweight tasks such as job submission, file transfers, and compiling code (on specific nodes). They are not designed for running resource-intensive jobs, which should be submitted to the compute nodes.

Specifications and usage notes

| Hostname | CPU (Sockets x Model) | Total Cores | Total RAM | Operating System | GPU Type | GPU Count | Usage Notes |
|---|---|---|---|---|---|---|---|
| login1 | 1 x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz | 8 | 15.39 GB | OpenShift Enterprise | Quadro K2200 | 1 | For file transfers, job submission, and lightweight tasks. |
| login2 | 1 x Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz | 1 | 3.70 GB | OpenShift Enterprise | N/A | N/A | Virtual server, for non-intensive tasks. No compilation. |
| login3 | 2 x Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz | 32 | 503.60 GB | RHEV | Quadro K2200 | 1 | For large compilation and interactive sessions. |

2 - Compute nodes

The foundational hardware components of DAIC.

DAIC compute nodes are high-performance servers with multiple CPUs, large memory, and, on many nodes, one or more GPUs. The cluster is heterogeneous: nodes vary in processor types, memory sizes, GPU configurations, and performance characteristics.

If your application requires specific hardware features, you must request them explicitly in your job script (see Submitting jobs).
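For example, the node features listed in the List of all nodes section (such as avx2, bigmem or ssd) can be requested with Slurm's --constraint option. A minimal sketch of the relevant job-script lines, assuming a standard Slurm setup:

#SBATCH --constraint=avx2            # only schedule on nodes that advertise the avx2 feature
# Features can be combined, e.g. --constraint="bigmem&ssd" for nodes with both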

CPUs

All compute nodes have multiple CPUs (sockets), each with multiple cores. Most nodes support hyper-threading, which allows two threads per physical core. The number of cores per node is listed in the List of all nodes section.

Request CPUs based on the number of threads your program can actually use. Requesting more CPUs than that does not improve performance and wastes resources; requesting fewer can slow your job down due to thread contention.

To request CPUs for your jobs, see Job scripts.
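As an illustration only (Job scripts covers the details), a job running a program that uses 8 threads might request CPUs like this; the executable name is a placeholder:

#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8    # match the number of threads your program actually uses
srun ./my_threaded_program   # placeholder for your own executable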

GPUs

Many nodes in DAIC include one or more NVIDIA GPUs. GPU types differ in architecture, memory size, and compute capability. The table below summarizes the main GPU types in DAIC. For a per-node overview, see the List of all nodes section.

To request GPUs in your job, use --gres=gpu:<type>:<count>. See GPU jobs for more information.
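For instance, using the Slurm type names from Table 1 below, a request for a single A40 could look like this sketch:

#SBATCH --gres=gpu:a40:1     # one A40 GPU; see Table 1 for the other Slurm type names
# Omitting the type, e.g. --gres=gpu:1, requests any available GPU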

Table 1: Counts and specifications of DAIC GPUs
| GPU (slurm) type | Count | Model | Architecture | Compute Capability | CUDA cores | Memory |
|---|---|---|---|---|---|---|
| l40 | 18 | NVIDIA L40 | Ada Lovelace | 8.9 | 18176 | 49152 MiB |
| a40 | 84 | NVIDIA A40 | Ampere | 8.6 | 10752 | 46068 MiB |
| turing | 24 | NVIDIA GeForce RTX 2080 Ti | Turing | 7.5 | 4352 | 11264 MiB |
| v100 | 11 | Tesla V100-SXM2-32GB | Volta | 7.0 | 5120 | 32768 MiB |

In Table 1, the column headers denote:

Model
The official product name of the GPU
Architecture
The hardware design used in the GPU, which defines its specifications and performance characteristics. Each architecture (e.g., Ampere, Turing, Volta) represents a different GPU generation.
Compute capability
A version number indicating the features supported by the GPU, including CUDA support. Higher values offer more advanced functionality.
CUDA cores
The number of processing cores available on the GPU. More CUDA cores allow more parallel operations, improving performance for parallelizable workloads.
Memory
The total internal memory on the GPU. This memory is required to store data for GPU computations. If a model’s memory is insufficient, performance may be severely affected.
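To check which GPU types are currently offered per partition, Slurm's sinfo can print the generic resources (GRES) of the nodes; a minimal example:

$ sinfo -o "%P %N %G"    # partition, node list, and GRES (GPU type and count)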

Memory

Each node has a fixed amount of RAM, shown in the List of all nodes section. Jobs may only use the memory explicitly requested using --mem or --mem-per-cpu. Exceeding the allocation may result in job failure.

Memory cannot be shared across nodes, and unused memory cannot be reallocated.

For memory-efficient jobs, consider tuning your requested memory to closely match your code's peak usage. For more information, see Slurm basics.
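One way to find a job's actual peak memory use after it has finished is Slurm's accounting tool sacct; the job ID below is a placeholder:

$ sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed,State
# MaxRSS is the peak resident memory of each job step; request --mem slightly above it next time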

List of all nodes

The following table gives an overview of the current nodes and their characteristics. Use the search bar to filter by hostname, GPU type, or any other column, and select which columns are visible.
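The same details can also be queried live on the cluster with scontrol; the hostname below is just an example from the table:

$ scontrol show node gpu01    # shows CPUTot, RealMemory, Gres and AvailableFeatures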

| Hostname | CPU (Sockets x Model) | Cores per Socket | Total Cores | CPU Speed (MHz) | Total RAM | Local Disk (/tmp) | GPU Type | GPU Count | Infiniband | Slurm Partitions | Slurm Active Features |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 100plus | 2 x Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz | 16 | 32 | 2097.594 | 755.585 GB | 3,1T | N/A | 0 | No | general;ewi-insy | avx;avx2;ht;10gbe;bigmem |
| 3dgi1 | 1 x AMD EPYC 7502P 32-Core Processor | 32 | 32 | 2500 | 251.41 GB | 148G | N/A | 0 | No | general;bk-ur-uds | avx;avx2;ht;10gbe;ssd |
| 3dgi2 | 1 x AMD EPYC 7502P 32-Core Processor | 32 | 32 | 2500 | 251.41 GB | 148G | N/A | 0 | No | general;bk-ur-uds | avx;avx2;ht;10gbe;ssd |
| awi01 | 2 x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 18 | 36 | 2833.728 | 376.384 GB | 393G | Tesla V100-PCIE-32GB | 1 | No | general;tnw-imphys | avx;avx2;ht;10gbe;avx512;gpumem32;nvme;ssd |
| awi02 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2820.703 | 503.619 GB | 393G | Tesla V100-SXM2-16GB | 2 | Yes | general;tnw-imphys | avx;avx2;ht;10gbe;bigmem;ssd |
| awi04 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 503.625 GB | 5,4T | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;imphysexclusive |
| awi08 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 503.625 GB | 5,4T | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;imphysexclusive |
| awi09 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 503.625 GB | 5,4T | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;imphysexclusive |
| awi10 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 503.625 GB | 5,4T | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;imphysexclusive |
| awi11 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 503.625 GB | 5,4T | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;imphysexclusive |
| awi12 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 503.625 GB | 5,4T | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;imphysexclusive |
| awi19 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 251.641 GB | 856G | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi20 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 251.641 GB | 856G | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi21 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 251.641 GB | 856G | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi22 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 251.641 GB | 856G | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi23 | 2 x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 18 | 36 | 2745.709 | 376.385 GB | 856G | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi24 | 2 x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 18 | 36 | 2636.492 | 376.385 GB | 856G | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi25 | 2 x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 18 | 36 | 2644.635 | 376.385 GB | 856G | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi26 | 2 x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 18 | 36 | 2669.061 | 376.385 GB | 856G | N/A | 0 | Yes | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| cor1 | 2 x Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz | 16 | 32 | 3452.148 | 1510.33 GB | 7,0T | Tesla V100-SXM2-32GB | 8 | No | general;me-cor | avx;avx2;ht;10gbe;avx512;gpumem32;ssd |
| gpu01 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650 | 503.408 GB | 415G | NVIDIA A40 | 3 | No | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu02 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650 | 503.408 GB | 415G | NVIDIA A40 | 3 | No | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu03 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650 | 503.408 GB | 415G | NVIDIA A40 | 3 | No | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu04 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650 | 503.408 GB | 415G | NVIDIA A40 | 3 | No | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu05 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650 | 503.408 GB | 415G | NVIDIA A40 | 3 | No | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu06 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650 | 503.408 GB | 415G | NVIDIA A40 | 3 | No | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu07 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650 | 503.408 GB | 415G | NVIDIA A40 | 3 | No | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu08 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650 | 503.408 GB | 415G | NVIDIA A40 | 3 | No | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu09 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650 | 503.408 GB | 415G | NVIDIA A40 | 3 | No | general;tnw-imphys | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu10 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650 | 503.408 GB | 415G | NVIDIA A40 | 3 | No | general;tnw-imphys | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu11 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650 | 503.408 GB | 415G | NVIDIA A40 | 3 | No | bk-ur-uds;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu12 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650 | 503.407 GB | 415G | NVIDIA A40 | 3 | No | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu14 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 503.275 GB | 856G | NVIDIA A40 | 3 | No | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu15 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 503.275 GB | 856G | NVIDIA A40 | 3 | No | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu16 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 503.275 GB | 856G | NVIDIA A40 | 3 | No | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu17 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 503.275 GB | 856G | NVIDIA A40 | 3 | No | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu18 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 503.275 GB | 856G | NVIDIA A40 | 3 | No | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu19 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 503.275 GB | 856G | NVIDIA A40 | 3 | No | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu20 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 1007.24 GB | 856G | NVIDIA A40 | 3 | No | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu21 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 1007.24 GB | 856G | NVIDIA A40 | 3 | No | ewi-insy-prb;general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu22 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 1007.24 GB | 856G | NVIDIA A40 | 3 | No | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu23 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 1007.24 GB | 856G | NVIDIA A40 | 3 | No | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu24 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 1007.24 GB | 856G | NVIDIA A40 | 3 | No | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu25 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 1007.24 GB | 856G | NVIDIA A40 | 3 | No | mmll;general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu26 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 1007.24 GB | 856G | NVIDIA A40 | 3 | No | lr-asm;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu27 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 503.275 GB | 856G | NVIDIA A40 | 3 | No | me-cor;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu28 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 503.275 GB | 856G | NVIDIA A40 | 3 | No | me-cor;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu29 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800 | 503.275 GB | 856G | NVIDIA A40 | 3 | No | me-cor;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu30 | 1 x AMD EPYC 9534 64-Core Processor | 64 | 64 | 2450 | 755.228 GB | 856G | NVIDIA L40 | 3 | No | ewi-insy;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu31 | 1 x AMD EPYC 9534 64-Core Processor | 64 | 64 | 2450 | 755.228 GB | 856G | NVIDIA L40 | 3 | No | ewi-insy;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu32 | 1 x AMD EPYC 9534 64-Core Processor | 64 | 64 | 2450 | 755.228 GB | 856G | NVIDIA L40 | 3 | No | ewi-me-sps;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu33 | 1 x AMD EPYC 9534 64-Core Processor | 64 | 64 | 2450 | 755.228 GB | 856G | NVIDIA L40 | 3 | No | lr-co;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu34 | 1 x AMD EPYC 9534 64-Core Processor | 64 | 64 | 2450 | 755.228 GB | 856G | NVIDIA L40 | 3 | No | ewi-insy;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu35 | 1 x AMD EPYC 9534 64-Core Processor | 64 | 64 | 2450 | 755.228 GB | 856G | NVIDIA L40 | 3 | No | bk-ar;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| grs1 | 2 x Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz | 8 | 16 | 3499.804 | 251.633 GB | 181G | N/A | 0 | Yes | citg-grs;general | avx;avx2;ht;ib;ssd |
| grs2 | 2 x Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz | 8 | 16 | 3499.804 | 251.633 GB | 181G | N/A | 0 | Yes | citg-grs;general | avx;avx2;ht;ib;ssd |
| grs3 | 2 x Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz | 8 | 16 | 3499.804 | 251.633 GB | 181G | N/A | 0 | Yes | citg-grs;general | avx;avx2;ht;ib;ssd |
| grs4 | 2 x Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz | 8 | 16 | 3500 | 251.633 GB | 181G | N/A | 0 | Yes | citg-grs;general | avx;avx2;ht;ib;ssd |
| influ1 | 2 x Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz | 16 | 32 | 2320.971 | 376.391 GB | 197G | NVIDIA GeForce RTX 2080 Ti | 8 | No | influence;ewi-insy;general | avx;avx2;ht;10gbe;avx512;nvme;ssd |
| influ2 | 2 x Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz | 16 | 32 | 2300 | 187.232 GB | 369G | NVIDIA GeForce RTX 2080 Ti | 4 | No | influence;ewi-insy;general | avx;avx2;ht;10gbe;avx512;ssd |
| influ3 | 2 x Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz | 16 | 32 | 2300 | 187.232 GB | 369G | NVIDIA GeForce RTX 2080 Ti | 4 | No | influence;ewi-insy;general | avx;avx2;ht;10gbe;avx512;ssd |
| influ4 | 2 x AMD EPYC 7452 32-Core Processor | 32 | 64 | 2350 | 251.626 GB | 148G | N/A | 0 | No | influence;ewi-insy;general | avx;avx2;ht;10gbe;ssd |
| influ5 | 2 x AMD EPYC 7452 32-Core Processor | 32 | 64 | 2350 | 503.61 GB | 148G | N/A | 0 | No | influence;ewi-insy;general | avx;avx2;ht;10gbe;bigmem;ssd |
| influ6 | 2 x AMD EPYC 7452 32-Core Processor | 32 | 64 | 2350 | 503.61 GB | 148G | N/A | 0 | No | influence;ewi-insy;general | avx;avx2;ht;10gbe;bigmem;ssd |
| insy15 | 2 x Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz | 16 | 32 | 2300 | 754.33 GB | 416G | NVIDIA GeForce RTX 2080 Ti | 4 | No | ewi-insy;general | avx;avx2;ht;10gbe;avx512;bigmem;ssd |
| insy16 | 2 x Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz | 16 | 32 | 2300 | 754.33 GB | 416G | NVIDIA GeForce RTX 2080 Ti | 4 | No | ewi-insy;general | avx;avx2;ht;10gbe;avx512;bigmem;ssd |
| Total (66 nodes) | | | 3016 cores | | 35.02 TiB | 76.79 TiB | | 137 GPUs | | | |

3 - Storage

What are the foundational components of DAIC?

Storage

DAIC compute nodes have direct access to the TU Delft home, group and project storage. You can use your TU Delft-installed machine or an SCP or SFTP client to transfer files to and from these storage areas and others (see data transfer), as demonstrated throughout this page.
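As a minimal sketch of such a transfer (the login hostname and project name are placeholders; see data transfer and SSH access for the exact addresses and options):

$ scp results.tar.gz <YourNetID>@login.daic.tudelft.nl:/tudelft.net/staff-umbrella/<project>/
$ sftp <YourNetID>@login.daic.tudelft.nl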

File System Overview

Unlike TU Delft’s DelftBlue, DAIC does not have a dedicated storage filesystem. This means there is no /scratch space for storing temporary files (see DelftBlue’s Storage description and Disk quota and scratch space). Instead, DAIC relies on a direct connection to the TU Delft network storage filesystem (see Overview data storage) from all its nodes, and offers the following types of storage areas:

Personal storage (aka home folder)

The Personal Storage is private and is meant to store personal files (program settings, bookmarks). A backup service protects your home files from both hardware failures and user error (you can restore previous versions of files from up to two weeks ago). The available space is limited by a quota (see Quotas) and is not intended for storing research data.

You have two (separate) home folders: one for Linux and one for Windows (because Linux and Windows store program settings differently). You can access these home folders from a machine running Linux or Windows using a command line interface or a browser via TU Delft's webdata. For example, your Windows home has a My Documents folder, which can be found on a Linux machine under /winhome/<YourNetID>/My Documents.

| Home directory | Access from | Storage location |
|---|---|---|
| Linux home folder | Linux | /home/nfs/<YourNetID> |
| | Windows | only accessible using an scp/sftp client (see SSH access) |
| | webdata | not available |
| Windows home folder | Linux | /winhome/<YourNetID> |
| | Windows | H: or \\tudelft.net\staff-homes\[a-z]\<YourNetID> |
| | webdata | https://webdata.tudelft.nl/staff-homes/[a-z]/<YourNetID> |

It’s possible to access the backups yourself. In Linux the backups are located under the (hidden, read-only) ~/.snapshot/ folder. In Windows you can right-click the H: drive and choose Restore previous versions.
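For example, on Linux you could list the available snapshots and copy an older version of a file back; the snapshot name and file name are placeholders:

$ ls ~/.snapshot/                                # list the available snapshots
$ cp ~/.snapshot/<snapshot-name>/report.txt ~/   # restore one file from a snapshot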

Group storage

The Group Storage is meant to share files (documents, educational and research data) with department/group members. The whole department or group has access to this storage, so this is not for confidential or project data. There is a backup service to protect the files, with previous versions up to two weeks ago. There is a Fair-Use policy for the used space.

| Destination | Access from | Storage location |
|---|---|---|
| Group Storage | Linux | /tudelft.net/staff-groups/<faculty>/<department>/<group> or /tudelft.net/staff-bulk/<faculty>/<department>/<group>/<NetID> |
| | Windows | M: or \\tudelft.net\staff-groups\<faculty>\<department>\<group> or L: or \\tudelft.net\staff-bulk\ewi\insy\<group>\<NetID> |
| | webdata | https://webdata.tudelft.nl/staff-groups/<faculty>/<department>/<group>/ |

Project Storage

The Project Storage is meant for storing (research) data (datasets, generated results, downloaded files and programs, …) for projects. Only the project members (including external persons) can access the data, so it is suitable for confidential data (though you may want to use encryption for highly sensitive confidential data). There is a backup service and a Fair-Use policy for the used space.

Project leaders (or supervisors) can request a Project Storage location via the Self-Service Portal or the Service Desk .

| Destination | Access from | Storage location |
|---|---|---|
| Project Storage | Linux | /tudelft.net/staff-umbrella/<project> |
| | Windows | U: or \\tudelft.net\staff-umbrella\<project> |
| | webdata | https://webdata.tudelft.nl/staff-umbrella/<project> or https://webdata.tudelft.nl/staff-bulk/<faculty>/<department>/<group>/<NetID> |

Local Storage

Local storage is meant for temporary storage of (large amounts of) data with fast access on a single computer. You can create your own personal folder inside the local storage. Unlike the network storage above, local storage is only accessible on that computer, not on other computers or through network file servers or webdata. There is no backup service nor quota. The available space is large but fixed, so leave enough space for other users. Files under /tmp that have not been accessed for 10 days are automatically removed.

| Destination | Access from | Storage location |
|---|---|---|
| Local storage | Linux | /tmp/<NetID> |
| | Windows | not available |
| | webdata | not available |
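For example, on a node you could create your personal folder and stage data there (assuming your Linux username equals your NetID); remember this space is per-node and cleaned up automatically:

$ mkdir -p /tmp/$USER               # create your personal folder on this node
$ cp big_dataset.tar /tmp/$USER/    # fast local scratch, only visible on this node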

Memory Storage

Memory storage is meant for short-term storage of limited amounts of data with very fast access on a single computer. You can create your own personal folder inside the memory storage location. Memory storage is only accessible on that computer, and there is no backup service nor quota. The available space is limited and shared with programs, so leave enough space (the computer will likely crash when you don’t!). Files that have not been accessed for 1 day are automatically removed.

| Destination | Access from | Storage location |
|---|---|---|
| Memory storage | Linux | /dev/shm/<NetID> |
| | Windows | not available |
| | webdata | not available |
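Because memory storage competes with running programs for RAM, it is worth checking the free space before writing to it; a short sketch:

$ df -h /dev/shm           # check how much memory-backed space is free
$ mkdir -p /dev/shm/$USER  # then create your personal folder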

Checking quota limits

The different storage areas accessible on DAIC have quotas (or usage limits). It’s important to regularly check your usage to avoid job failures and ensure smooth workflows.

Helpful commands

  • For /home:
$ quota -s -f ~
Disk quotas for user netid (uid 000000): 
     Filesystem   space   quota   limit   grace   files   quota   limit   grace
svm111.storage.tudelft.net:/staff_homes_linux/n/netid
                 12872M  24576M  30720M           19671   4295m   4295m  
  • For project space: You can use either:
$ du -hs /tudelft.net/staff-umbrella/my-cool-project
37G	/tudelft.net/staff-umbrella/my-cool-project

Or:

$ df -h /tudelft.net/staff-umbrella/my-cool-project
Filesystem                                       Size  Used Avail Use% Mounted on
svm107.storage.tudelft.net:/staff_umbrella_my-cool-project  1,0T   38G  987G   4% /tudelft.net/staff-umbrella/my-cool-project

Note that the difference between the two outputs is due to snapshots, which can be retained for up to two weeks.

4 - Scheduler

What are the foundational components of DAIC?

Workload scheduler

DAIC uses the Slurm scheduler to efficiently manage workloads. All jobs for the cluster have to be submitted as batch jobs into a queue. The scheduler then manages and prioritizes the jobs in the queue, allocates resources (CPUs, memory) for the jobs, executes the jobs and enforces the resource allocations. See the job submission pages for more information.

A Slurm-based cluster is composed of a set of login nodes that are used to access the cluster and submit computational jobs, and a central manager that orchestrates computational demands across a set of compute nodes. The compute nodes are organized logically into groups called partitions, which define job limits or access rights. The central manager provides fault-tolerant hierarchical communications to ensure optimal and fair use of the available compute resources by eligible users, and makes it easier to run and schedule complex jobs across multiple nodes.
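To make this concrete, a minimal batch job could look like the sketch below; the partition, resource values and program are placeholders, and the job submission pages give the DAIC-specific guidance:

#!/bin/sh
#SBATCH --job-name=example
#SBATCH --partition=general     # partition (queue) to submit to
#SBATCH --time=01:00:00         # wall-clock time limit
#SBATCH --cpus-per-task=2
#SBATCH --mem=4G
srun ./my_program               # placeholder for your own executable

Submit it with sbatch (e.g. $ sbatch job.sh); the scheduler queues the job until the requested resources become available.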

5 - Cluster comparison

Overview of the clusters available to TU Delft researchers, including DAIC, DelftBlue, and DAS.

Cluster comparison

TU Delft clusters

DAIC is one of several clusters accessible to TU Delft CS researchers (and their collaborators). The table below compares these clusters in terms of use case, eligible users, and other characteristics.

| | DAIC | DelftBlue | DAS |
|---|---|---|---|
| Primary use cases | Research, especially in AI | Research & Education | Distributed systems research: streaming applications, edge and fog computing, in-network processing, complex security and trust policies, machine learning research, ... |
| Contributors | Certain groups within TU Delft (see Contributing departments) | All TU Delft faculties | Multiple universities & SURF |
| Eligible users | • Faculty, PhD students, and researchers from contributing departments • MSc and BSc students (if recommended by a professor) are provided limited access | All TU Delft affiliates | • Faculty and PhD students who are either members of the ASCI research school or the ASCI partner universities • ASTRON employees • NLeSC employees • Master students (if recommended by a professor) are provided limited access |
| Website | DAIC documentation | DelftBlue Documentation | DAS Documentation |
| Contact info | DAIC community | DHPC team | DAS admin |
| Request account | Access and accounts | Get an account | Email DAS admin with details like the user's affiliation and the planned purpose of the account. |
| Getting started | Quickstart | Crash course | |
| Hardware | System specifications | DHPC hardware | Head node + 16 x FAT nodes (Lenovo SR665, dual socket, 2x16 core, 128 GB memory, 1xA4000) + 4 x GPU nodes (Lenovo SR665, dual socket, 2x16 core, 128 GB memory, 1xA5000) |
| Software stack | Software | DHPC modules | Base OS: Rocky Linux, OpenHPC, Slurm Workload Manager |
| Data storage | Storage | Storage | Storage: 128 TB (RAID6) |
| Access to TU Delft Network storage | Yes (all nodes) | Only in login nodes | Not supported |
| Sharing data in collaboration | Via Project Storage (see Storage) | | |
| Has GPUs? | Yes | Yes | Yes |
| Cost of use | Contribution towards hardware purchase | - | |

SURF clusters

SURF, the collaborative organization for IT in Dutch education and research, has installed and is currently operating the Dutch National supercomputer, Snellius, which houses 144 40GB A100 GPUs as of Q3 2021 (36 gcn nodes x 4 A100 GPUs/node = 144 A100 GPUs total) with other specs detailed in the Snellius hardware and file systems wiki.

SURF also operates other clusters like Spider for processing large structured data sets, and ODISSEI Secure Supercomputer (OSSC) for large-scale analyses of highly-sensitive data. For an overview of SURF clusters, see the SURF wiki.

TU Delft researchers in TBM and CITG already have direct and easy access to the compute power and data services of SURF, while members of other faculties need to apply for access as detailed in SURF’s guide to Apply for access to compute services.

TU Delft cloud resources

For both education and research activities, TU Delft has established the Cloud4Research program. Cloud4Research aims to facilitate the use of public cloud resources, primarily Amazon AWS. At the administrative level, Cloud4Research provides AWS accounts with an initial budget; subsequent billing can be charged to a project code instead of a personal credit card. At the technical level, the ICT innovation team provides intake meetings to facilitate getting started. Please refer to the Policies and FAQ pages for more details.