UD’s IT Research Computing team maintains several community clusters, co-owned by faculty members or groups. DSI owns a number of nodes on the most recently deployed “Caviness” cluster and has access to the new DARWIN cluster. UD faculty members can request access to the DSI-owned nodes for their research groups through DSI (see below).
DSI Sponsored Compute Resources
DARWIN Compute and Storage System
The DARWIN system is new and currently available to early-access users. It is designed to support research with the diverse technology requirements often found in interdisciplinary sciences: high I/O and low latency, scalability, GPU-enabled execution, visualization acceleration, high-memory workloads, and large local scratch space.
Allocation requests for UD investigators
Startup allocations can currently be requested via this web form. For more information about what kinds of allocations can be requested and how to apply, please see the Allocation Information/Guidelines.
Allocation requests for external partners
Currently, allocations for non-profit partners are free. For-profit partners should contact firstname.lastname@example.org.
Allocation requests will be made via the National Science Foundation’s XSEDE Resource Allocations Committee (XRAC). Requests can be made for Startup, Education, or Research allocations. Please use the XSEDE user portal for submitting allocation requests. You will need to create a (free) XSEDE account. Information on DARWIN can be found by scrolling down on the left-hand side of the XSEDE resource page.
We recommend that users begin by requesting a Startup Allocation. Requests for Startup and Educational Allocations can be submitted any time. Research Allocation requests, which can be larger, will be reviewed by the XRAC quarterly. See XSEDE’s Research Allocation website for submission deadlines.
Acceptable Usage Agreement
All users agree to abide by the Acceptable Usage Agreement (internal users) and XSEDE Usage Policy (external users). For general questions/problems with allocations please contact email@example.com.
Caviness Community Cluster
The Caviness cluster, UD’s third Community High-Performance Computing (HPC) cluster, is a distributed-memory Linux cluster. The first generation of Caviness consists of 194 compute nodes, aggregating 7,336 Intel CPU cores; 110,080 CUDA cores; 4,800 Turing tensor cores; 46TiB RAM; 450TiB high-speed Lustre scratch; and 225TiB NFS workgroup long-term storage. As a Caviness stakeholder, DSI owns two nodes with the following specification: 2 × 20-core Intel Xeon Gold 6230; 768GiB RAM; 750GiB local scratch; NVIDIA T4 GPU; Omni-Path high-speed networking.
To request an allocation from Caviness, please use the Allocation Request Application. All users agree to abide by the Acceptable Usage Agreement. For questions/problems with allocations for DSI-sponsored compute resources, send an email to firstname.lastname@example.org.
DELL oneAPI Precision Workstation
As part of the DELL Seed Unit program, DSI has access to a DELL oneAPI Precision Workstation that allows users to test the Intel oneAPI Toolkit. oneAPI is a single, unified programming model that aims to simplify development across different hardware architectures: CPUs, GPUs, FPGAs, AI accelerators, and more. oneAPI provides libraries for compute- and data-intensive domains such as deep learning, scientific computing, video analytics, and media processing. The system is a workstation suitable for education and testing purposes to get familiar with the toolkits. If you or your students are interested in testing out this system, send an email to email@example.com. Training and workshops are forthcoming.
Other UD Compute Resources
BioMix High-Performance Computing (HPC) Cluster
BioMix is a special-purpose HPC system geared toward bioinformatics applications. It has more than 600 CPU cores with a combined 3.5TB of RAM and 372TB of disk space to support bioinformatics analysis. Of these, 425 CPU cores are made freely available to researchers as part of the Linux-based BioMix cluster, where all nodes are connected via 1 or 10 gigabit Ethernet and have access to a 20TB shared RAID array. A combination of system types allows researchers to choose the systems best suited to the needs of a given analysis, including numerous nodes configured for memory-intensive applications with 128–512GB of RAM per machine.
Farber High-Performance Computing (HPC) Cluster
Farber is a predecessor HPC system to Caviness. It is a distributed-memory Linux cluster consisting of 100 compute nodes, which total 2,000 “Ivy Bridge” cores and 6.4TB RAM. Storage options include a 288TB Lustre filesystem for on-line processing and a 72TB NFS filesystem for workgroup and long-term storage. The cluster is located in the University’s core data center and is supported by the central IT department, with redundant power, cooling, and network connectivity. It has a 56Gbps (FDR) InfiniBand network for MPI and fast Lustre disk access, a 1Gbps TCP/IP network for scheduling and NFS, and two 10Gbps links to the core UD network.
The National Science Foundation (NSF)’s XSEDE project (xsede.org) facilitates access to NSF-supported computational resources. Access is free to projects of all funding sources (with some preference given to NSF projects). A startup allocation can be requested through a short proposal at any time. Larger allocations require a proposal to the XSEDE Resource Allocations Committee (XRAC), with quarterly submission deadlines – see portal.xsede.org/allocations. Proposals can also include requests for the time of computational experts.
The Department of Energy (DOE)’s INCITE program makes high-performance computing resources available to researchers conducting projects within DOE’s mission, which is fairly broad. Proposals are accepted annually, with a deadline typically in June – see www.doeleadershipcomputing.org.
Other agencies may also offer access to their computing and data resources for projects in their mission. Examples:
- NASA’s High-end Computing program – see https://www.hec.nasa.gov.
- NSF’s Geosciences Directorate maintains separate computing resources for university researchers and NCAR scientists in atmospheric and related sciences – see www2.cisl.ucar.edu/user-support/allocations.
Cloud Computing resources
Under development: DSI is considering obtaining an allocation of cloud computing resources for use by UD faculty members on DSI-related projects. We would like to hear from UD members who have experience obtaining and using such resources. Please contact us at firstname.lastname@example.org.