Cluster Computing

Welcome to the CfN wiki for cluster computing resources.

CfN runs a high-performance computing cluster consisting of 576 Intel Xeon compute cores with a maximum of 62GB RAM per job (possibly extensible), a 10Gb high-speed internal network, over 200TB of RAID-6 high-speed storage, and a tape backup system.

The administrator is Michael Stauffer (admin@cfn.med.upenn.edu).

Please let us know of any issues with this wiki.

Searching the wiki

You can search the entire wiki (e.g. for an error message you got) using the search box in the upper right of the window.

TIP: Search for only the part of an error message that seems unique to the error; don't include things like compute node names or filenames.

TIP: Use quotes to search for a complete phrase, e.g. “memory allocation”

TOPICS

Cluster Specifications

Overview for Cluster for Grant Proposals

Getting Help


Intro Topics

Accounts and VPN (Getting Started)
Logging In, Linux Basics, X11 & GUI's
Cluster Basics (Overview)

Running Jobs

Using SGE (Running Jobs & Gettin' Stuff Done)

Data

File Transfer & Remote File Access
Backup / Archiving / Cold Storage (Alternate ways to store data)
Sharing Data - Read/Write access for other users
HCP Data Set
Scanner Gradient Coefficient Files

Other Topics

XNAT
x2go Remote Desktop (Alternative to X11 for slower connections)
PACS (Clinical Radiology Image Database)
Building/Compiling Software

Troubleshooting / FAQ

Common Issues & Error Messages
Asking the Admins for Help
What To Do When the Cluster is Very Busy

Application Details & Tips

Matlab
FSL
SPM
R & ANTsR
MRtrix
gcc/g++
Python
Java

Billing

Cluster Billing - Methods and Fees
Billing Reports
Slot Usage Reports (CPU & Memory Usage)

See Also

PICSL Cluster - Usage Guide (for a different perspective, but note that its details differ from those for the CfN cluster).

PICSL USERS

Welcome, PICSL Cluster users! Please see this Transition Guide to get started using the CfN cluster, and also look through the other pages on this wiki.