Computational Resources
UD’s IT Research Computing team maintains several community clusters, each co-owned by faculty members or groups. DSI owns a number of nodes on the most recently deployed “Caviness” cluster and has access to the new DARWIN cluster. UD faculty members can request access to the DSI-owned nodes for their research groups through DSI (see below).
DSI Sponsored Compute Resources
DARWIN Compute and Storage System
The DARWIN system is new and currently available to early-access users. It is designed to support research with the diverse technology requirements often found in interdisciplinary science: high I/O and low latency, scalability, GPU-enabled execution, visualization acceleration, high-memory workloads, and large local scratch space.
Allocation requests for UD investigators
Startup allocations can currently be requested via the Startup Allocation Request web form, and research allocation requests can be submitted via the Research Allocation Request web form. To make adjustments to existing allocations, including allocation renewals, use the Allocation Extension web form. Education allocations for courses or workshops can be submitted via the Education Allocation web form. For more information about what kinds of allocations can be requested and how to apply, please see the Allocation Information/Guidelines. Currently, allocations for UD investigators are free.
Allocation requests for external partners
Currently, allocations for non-profit partners are free. For-profit partners should see the Fees page for details.
The NSF’s ACCESS program has replaced the old XSEDE program, but it operates in much the same way.
Allocation requests will be made via the National Science Foundation’s ACCESS Allocation Review Committee (AARC). Please use the ACCESS allocations portal for submitting allocation requests. You will need to create a (free) ACCESS account. If you have an existing XSEDE portal account, your XSEDE account is now your ACCESS account. Information on DARWIN can be found by scrolling down on the ACCESS resource page.
Allocation requests are divided into four levels (called opportunities) and are accepted and reviewed on an ongoing basis.
Acceptable Usage Agreement
All users agree to abide by the Acceptable Usage Agreement (internal users) or the ACCESS Acceptable Use policy (external users). For general questions or problems with allocations, please contact darwin-info@udel.edu.
Acknowledgement
We require all allocation recipients to acknowledge their allocation awards using the following standard text: “This research was supported in part through the use of DARWIN computing system: DARWIN – A Resource for Computational and Data-intensive Research at the University of Delaware and in the Delaware Region, Rudolf Eigenmann, Benjamin E. Bagozzi, Arthi Jayaraman, William Totten, and Cathy H. Wu, University of Delaware, 2021, URL: https://udspace.udel.edu/handle/19716/29071”
Caviness Community Cluster
The Caviness cluster, UD’s third Community High-Performance Computing (HPC) Cluster, is a distributed-memory Linux cluster. The first generation of Caviness consists of 194 compute nodes aggregating 7,336 Intel CPU cores; 110,080 CUDA cores; 4,800 Turing tensor cores; 46TiB of RAM; 450TiB of high-speed Lustre scratch; and 225TiB of NFS workgroup long-term storage. As a Caviness stakeholder, DSI owns two nodes with the following specification: 2 × 20-core Intel Xeon Gold 6230; 768GiB RAM; 750GiB local scratch; an NVIDIA T4 GPU; and Omni-Path high-speed networking.
To request an allocation on Caviness, please use the Allocation Request Application. All users agree to abide by the Acceptable Usage Agreement. For questions or problems with allocations for DSI-sponsored compute resources, send an email to dsiallocations@udel.edu.
DELL oneAPI Precision Workstation
As part of the DELL Seed Unit program, DSI has access to a DELL oneAPI Precision Workstation that allows users to test out the Intel oneAPI toolkits. oneAPI is a single, unified programming model that aims to simplify development across different hardware architectures: CPUs, GPUs, FPGAs, AI accelerators, and more. oneAPI provides libraries for compute- and data-intensive domains such as deep learning, scientific computing, video analytics, and media processing. The system is a workstation suitable for education and testing purposes and for getting familiar with the toolkits. If you or your students are interested in testing out this system, send an email to jcowart@udel.edu. Training and workshops are forthcoming.
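To give a concrete flavor of the programming model, here is a minimal sketch of a SYCL vector addition of the kind the oneAPI DPC++/C++ compiler builds (e.g., icpx -fsycl vector_add.cpp). The file name and problem size are arbitrary choices for illustration, and the device actually selected at run time depends on the hardware and drivers present:

    // vector_add.cpp -- minimal SYCL 2020 vector addition sketch.
    // Build with the oneAPI DPC++ compiler: icpx -fsycl vector_add.cpp
    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        const size_t n = 1024;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        // The default selector picks the "best" available device (GPU, CPU, ...).
        sycl::queue q{sycl::default_selector_v};
        std::cout << "Running on: "
                  << q.get_device().get_info<sycl::info::device::name>() << "\n";

        {   // Buffers hand ownership of the host data to the SYCL runtime.
            sycl::buffer<float> buf_a(a.data(), sycl::range<1>(n));
            sycl::buffer<float> buf_b(b.data(), sycl::range<1>(n));
            sycl::buffer<float> buf_c(c.data(), sycl::range<1>(n));

            q.submit([&](sycl::handler& h) {
                sycl::accessor A(buf_a, h, sycl::read_only);
                sycl::accessor B(buf_b, h, sycl::read_only);
                sycl::accessor C(buf_c, h, sycl::write_only, sycl::no_init);
                // One work-item per element, on whichever device q selected.
                h.parallel_for(sycl::range<1>(n),
                               [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
            });
        }   // Buffer destructors copy results back to the host vectors.

        std::cout << "c[0] = " << c[0] << " (expected 3)\n";
        return 0;
    }

The same source can be retargeted to a CPU, a GPU, or an FPGA emulator without code changes, which is the portability argument behind oneAPI.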
Acknowledgement
We require all users who are granted use of the DSI-sponsored resources listed above to acknowledge their allocation awards:
For DARWIN awards: “This research was supported in part through the use of Data Science Institute (DSI) computational resources at the University of Delaware”
Other UD Compute Resources
BioMix High-Performance Computing (HPC) Cluster
BioMix is a special-purpose HPC system geared toward bioinformatics applications. It has more than 600 CPU cores, a combined 3.5TB of RAM, and 372TB of disk space to support bioinformatics analysis. Of these, 425 CPU cores are made freely available to researchers as part of the Linux-based BioMix cluster, where all nodes are connected via 1 or 10 gigabit Ethernet and have access to a 20TB shared RAID array. A combination of system types lets researchers choose the systems best suited to the needs of a given analysis; these include numerous nodes configured for memory-intensive applications with 128–512GB of RAM per machine.
Farber High-Performance Computing (HPC) Cluster
Farber is a predecessor HPC system to Caviness. It is a distributed-memory Linux cluster consisting of 100 compute nodes, which total 2,000 “Ivy Bridge” cores and 6.4TB of RAM. Storage options include a 288TB Lustre filesystem for online processing and a 72TB NFS filesystem for workgroup and long-term storage. The cluster is located in the University’s core data center and is supported by the central IT department, with redundant power, cooling, and network connectivity. It has a 56Gbps (FDR) InfiniBand network for MPI and fast Lustre disk access, a 1Gbps TCP/IP network for scheduling and NFS, and two 10Gbps links to the core UD network.
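Because Farber, like Caviness, is a distributed-memory cluster, multi-node jobs typically communicate over the high-speed fabric using MPI. As a purely illustrative sketch rather than a cluster-specific recipe (compiler wrapper and launcher names vary with the MPI stack installed on each system), a minimal MPI program looks like this:

    // mpi_hello.cpp -- minimal MPI example; each rank reports itself.
    // Build (wrapper name depends on the MPI stack): mpicxx mpi_hello.cpp
    // Run across processes, e.g.: mpirun -n 4 ./a.out
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);                // start the MPI runtime

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's ID
        MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

        char name[MPI_MAX_PROCESSOR_NAME];
        int len = 0;
        MPI_Get_processor_name(name, &len);    // typically the node's hostname

        // On a distributed-memory cluster, ranks may land on different nodes,
        // each with its own memory; data is exchanged via explicit messages.
        std::cout << "Rank " << rank << " of " << size
                  << " running on " << name << "\n";

        MPI_Finalize();                        // shut down the MPI runtime
        return 0;
    }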
National/External Resources
The National Science Foundation (NSF)’s XSEDE project (xsede.org) facilitates access to NSF-supported computational resources; as noted above, XSEDE has since been replaced by the ACCESS program, which operates in much the same way. Access is free to projects from all funding sources (with some preference given to NSF projects). A startup allocation can be requested through a short proposal at any time. Larger allocations require a proposal to the XSEDE Resource Allocation Committee (XRAC), with quarterly submission deadlines – see portal.xsede.org/allocations. Proposals can also include requests for the time of computational experts.
The Department of Energy (DOE)’s INCITE program makes high-performance computing resources available to researchers conducting projects within DOE’s mission, which is fairly broad. Proposals are accepted annually, with a deadline typically in June – see www.doeleadershipcomputing.org.
Other agencies may also offer access to their computing and data resources for projects within their missions. Examples:
- NASA’s High-end Computing program – see https://www.hec.nasa.gov.
- NSF’s Geosciences Directorate maintains separate computing resources for university researchers and NCAR scientists in atmospheric and related sciences – see www2.cisl.ucar.edu/user-support/allocations.
Cloud Computing resources
Under development: DSI is considering obtaining an allocation of cloud computing resources for use by UD faculty members on DSI-related projects. We’d like to hear from UD members who have experience obtaining and using such resources. Please contact us at dsi-core@udel.edu.