NOTE: This post is out of date! Penn State has switched to a new cluster system.
This is for my own reference, because if I go more than a few days without doing these steps I will probably forget them.
- If there are files on your local computer that you need to use on the cluster, transfer them over with this command:
scp path/to/file/file1.c other/path/file2.c email@example.com:~/work
To copy an entire directory over:
scp -r path/to/directory firstname.lastname@example.org:~/work
- The Lion-GA cluster is the one with the Nvidia graphics cards. SSH into it with -Y so that you can launch a text editor with a GUI. I refuse to learn vim or emacs.
ssh -Y email@example.com
- Once you’re in Lion-GA, to see all available GPU nodes, do
pbsnodes
There are 8 total as of right now. It seems like there are always a few that are offline. To see which are currently offline, do
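(Presumably this is the -l flag, which lists only the nodes that are down or offline:)
pbsnodes -l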
- SSH into one of the nodes that are online.
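Something like the following, where lionga1 is just a stand-in; the real node names come from the pbsnodes output:
ssh lionga1   # substitute an online node name from pbsnodes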
- To see a list of the 8 graphics cards and any processes that are currently running on them, do
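(Presumably this is nvidia-smi, which prints each GPU along with any processes running on it:)
nvidia-smi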
- Once you are SSH’d into the GPU node, you can compile CUDA .cu files using nvcc and run the output, just as you would on your local machine.
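For example (kernel.cu here is just a stand-in for whatever file got copied over):
nvcc -o kernel kernel.cu   # kernel.cu is a placeholder name
./kernel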
- To edit a file, log out of the GPU node and then run the editor from the login node.
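Presumably something like gedit, which works over the -Y forwarding set up earlier (file1.c is just the example file from above):
gedit ~/work/file1.c &   # gedit is an assumption; any GUI editor over X forwarding works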