Please log in through the login node first and reserve an interactive session in Slurm; you should then be able to ssh to the GPU node.

Log in to the login node using ssh netid@dkucc-login-01.oit.duke.edu
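For example, from your local terminal (netid is a placeholder for your own NetID):

# Connect to the DKUCC login node; replace netid with your NetID
ssh netid@dkucc-login-01.oit.duke.edu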

Then run sinfo to list the partitions and the nodes in each, as shown below.

$ sinfo

PARTITION   AVAIL  TIMELIMIT   NODES  STATE  NODELIST
common*     up     90-00:00:0      1  mix    dkucc-core-01
common*     up     90-00:00:0      7  idle   dkucc-core-[02-08]
common-gpu  up     7-00:00:00      1  alloc  dkucc-core-gpu-01
common-gpu  up     7-00:00:00      1  idle   dkucc-core-gpu-02
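To look at just the GPU partition, sinfo can be restricted with the -p option, for example:

# Show only the common-gpu partition to see which GPU nodes are idle
sinfo -p common-gpu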

An interactive session on a GPU node can be started on the DKUCC with the command:

srun -p common-gpu --gres=gpu:1 --exclusive --pty bash -i
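Once the prompt returns, the shell is running directly on the allocated GPU node. As a quick sanity check (assuming the NVIDIA driver utilities are installed on the GPU nodes), you can confirm the node name and the reserved GPU:

# Run inside the interactive session started by srun
hostname      # should print the GPU node name, e.g. dkucc-core-gpu-02
nvidia-smi    # lists the GPU(s) visible to this session

If you would rather ssh to the GPU node from the login node, as described at the top of this page, first look up which node your job was allocated:

# Run on the login node; the NODELIST column shows your allocated node
squeue -u $USER
# Then ssh to that node (dkucc-core-gpu-01 is only an example;
# use the node name from your own job)
ssh dkucc-core-gpu-01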