High-performance computing is handled by the CSDE pool in Hyak, the UW-wide high-performance computing cluster. All Hyak access is authenticated by UW NetID and requires two-factor authentication via “DUO”. If you are a UW student, you can also join the UW HPC Club and access the larger STF Hyak pool. (You can still connect from the CSDE Unix systems to Hyak if you use this allocation pool.)
To use the CSDE Hyak nodes (Klone is the latest generation as of 4/2021), all of the following are required:
- CSDE Computing admins must add your UW NetID to the group “u_hyak_csde” to enable access. Request Hyak access from email@example.com.
- You need to have DUO two-factor enrollment enabled on your UW NetID. If you are a current UW member, you can enroll your device yourself; otherwise, CSDE Help must do this on your behalf.
- You must enroll a compatible two-factor device in the UW DUO system.
- You must add the Hyak server and the Lolo server (storage system) to your UW NetID self-service page. To do this, open your UW NetID self-service page, click “Computing Services,” check the “Hyak Server” and “Lolo Server” boxes in the “Inactive Services” section, click “Subscribe” at the bottom of the page, and click “Finish.” After subscribing, it may take up to an hour for access to be fully provisioned.
Connecting to Hyak
You’ll need to SSH into Hyak using your UW NetID username and password; you will then be asked to approve the login via DUO two-factor authentication. Depending on which cluster you want, use one of the following hostnames:
- ikt.hyak.uw.edu (retired June 2020)
- mox.hyak.uw.edu (2018 generation of the cluster)
- klone.hyak.uw.edu (2021 generation of the cluster)
For example: ssh UWNetID@mox.hyak.uw.edu, substituting your own user name for “UWNetID.” Please use the /gscratch/csde area and create a subdirectory there named with your UW NetID. The lolo collaboration file system is located at /lolo/collaboration/hyak/csde/.
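The first-login setup described above can be sketched with a couple of shell commands. This is a minimal sketch: the /gscratch/csde path comes from the instructions above, but the SCRATCH_BASE default below points at a temporary directory so the sketch can be tried anywhere; on Hyak itself you would set SCRATCH_BASE=/gscratch/csde.

```shell
# Sketch of first-login setup. On Hyak, set SCRATCH_BASE=/gscratch/csde
# (the CSDE scratch area named above); the default below uses a temporary
# directory so this sketch is runnable anywhere for illustration.
SCRATCH_BASE="${SCRATCH_BASE:-$(mktemp -d)}"
NETID="${NETID:-${USER:-$(id -un)}}"   # your UW NetID (defaults to login name)
WORKDIR="${SCRATCH_BASE}/${NETID}"
mkdir -p "${WORKDIR}"                  # create your personal subdirectory
cd "${WORKDIR}"
echo "working in ${WORKDIR}"
```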
Please subscribe to the Hyak status mailing list for updates.
The basic gist of the Hyak cluster is this: you SSH into the head node of the Hyak system, where you can do minor work or ask the scheduler for an “interactive node” that you can SSH into directly and work on. The intended way to use the cluster, however, is to write a batch submit script and submit your job to the scheduler. Once you set up your SSH key relationship, you won’t need to use your DUO 2FA login for each connection. On a standard Hyak node in the batch system, all software is provided as a “module,” so you’ll have to load the “R” or “Microsoft R Open” (formerly Revolution R) module. See the “Software Development Tools” section of the Hyak documentation.
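The batch workflow described above can be sketched as follows. This is a hypothetical example: the module name, the resource requests, and the runsim.sh / mysim.R file names are illustrative placeholders, and the exact scheduler directives vary by cluster generation, so check "module avail" and the Hyak documentation for what is actually installed before adapting it.

```shell
# Hypothetical batch-submit sketch. Module names, resource requests, and the
# script names (runsim.sh, mysim.R) are illustrative placeholders only.
cat > runsim.sh <<'EOF'
#!/bin/bash
#PBS -N runsim             # job name shown in the queue
#PBS -l nodes=1:ppn=16     # one node, 16 cores (size to your allocation)
#PBS -l walltime=04:00:00  # requested wall-clock time
module load r_3.6.0        # software is a "module"; exact name varies
cd "$PBS_O_WORKDIR"        # run from the directory you submitted from
Rscript mysim.R            # your actual workload
EOF
chmod +x runsim.sh
# On Hyak you would then submit it to the scheduler, for example:
#   qsub -W group_list=hyak-csde runsim.sh
```

The queue section below covers which group_list value to pass for each allocation.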
Additional information is available below:
Which Hyak Queue should I use?
Have a lot of parallel workloads to run? The backfill queue offers the idle capacity of thousands of CPU cores.
NOTE: Each job in the backfill queue can only run for ~2 hours before being shut down, so divide your jobs up accordingly and/or use “checkpointing”!
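One common way to work within the backfill limit, sketched below, is to split a long run into many short jobs, each sized to finish inside the ~2-hour window. Everything here is a placeholder (the chunk count, the CHUNK variable, and runsim.sh are assumptions about your workload), and the sketch only prints the submit commands rather than calling qsub.

```shell
# Hypothetical sketch: split one long workload into backfill-sized pieces,
# submitting one short job per chunk. This only prints the qsub commands;
# NCHUNKS, CHUNK, and runsim.sh are illustrative placeholders.
NCHUNKS=10
SUBMITTED=0
for i in $(seq 1 "$NCHUNKS"); do
  echo "qsub -W group_list=bf -v CHUNK=${i} runsim.sh"
  SUBMITTED=$((SUBMITTED + 1))
done
echo "prepared ${SUBMITTED} backfill jobs"
```

Each job then reads its CHUNK variable to decide which slice of the work to do, and checkpointing covers the case where a chunk still runs long.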
As a member of the CSDE Hyak allocation, you have access to the few CSDE-owned nodes (currently 4 Mox nodes as of 11/2019) as well as the backfill queue.
CSDE strongly advises you to develop for and use the backfill queue whenever possible.
Run your job on any of the three queues using the following syntax:
- STF queue (UW students only)*: qsub -W group_list=hyak-stf runsim.sh
- CSDE queue: qsub -W group_list=hyak-csde runsim.sh
- Backfill queue: qsub -W group_list=bf runsim.sh
*If you are a currently enrolled UW student paying the Student Technology Fee, you should join the UW HPC Club. This allows you to submit jobs to the student node allocation.
Hyak node purchases
For node options, rates, and other details, please see the UW-IT Service Catalog page for Hyak.
Note that the Arts and Sciences Dean’s office pays for the “hotel slots” that CSDE Hyak blades occupy, so purchasing a node requires approval from the Dean’s office.
Please contact CSDE Help for assistance with this process.
Current Node pricing as of 05/10/2021
Nodes are still 40 cores, and the available memory configurations are listed below.
- 192 GB nodes: $4,708.94 per node
- 384 GB nodes: $5,513.57 per node
- 768 GB nodes: $7,191.47 per node
Hyak Node Retirement
Blades are deployed for a minimum of three years. Blade deployments may be extended only if there is no demand for the slots they occupy. Because Hyak currently has many unoccupied slots, nodes have continued to run beyond their three-year minimum lifespan; as long as they continue to operate, they remain in the original owner’s queues. The IKT cluster has been running for almost 10 years and will be retired/turned off in June 2020.
Hyak utilization data and a hardware inventory are also available.
Citation in Publications
Please remember to acknowledge Hyak in any media featuring results that Hyak helped generate. When citing Hyak, please use the following language:
“This work was facilitated through the use of advanced computational, storage, and networking infrastructure provided by the Hyak supercomputer system at the University of Washington.”
When you cite Hyak, please let us know by emailing firstname.lastname@example.org with “Hyak” as the first word in the subject along with a citation we can use in the body of the message. Likewise, please let us know of successful funding proposals and research collaborations to which Hyak contributed.
Hyak is a CSDE resource, so remember to cite CSDE as well! See the CSDE website for more information on acknowledging support from CSDE.