Filesystems

The filesystems available on the Wilson Cluster are described below.

/nashome
  User home area. Quota: 5 GB per user.
  A valid Kerberos ticket is required to access this filesystem, so you must allow ssh to delegate (forward) your Kerberos ticket when you log in to the cluster (typically ssh -K, or GSSAPIDelegateCredentials yes in your ssh config). Do not launch batch jobs from /nashome: they will fail silently because batch jobs do not hold a valid Kerberos ticket. Likewise, a batch job should not depend on access to /nashome in any way.
  By default, every user with a valid account on WC-IC has a subdirectory under this area.
  Backup: YES
/work1
  Project data area; permissions and quota are managed per GID/group.
  Default quota: 25 GB per project. Projects may request additional space using the Project and User Requests forms.
  Backup: NO
/cvmfs
  CernVM File System (CVMFS).
  Most experiments with existing access to CVMFS can access this area on the WC-IC submit host and worker nodes.
/wclustre
  Lustre storage, visible on all cluster worker nodes.
  Default quota: 100 GB per group.
  Ideal for temporary storage (on the order of a month) of very large data files; NOT suitable for large numbers of small files. Disk space usage is monitored and quotas are enforced.
  One of the main factors behind the high performance of Lustre file systems is the ability to stripe data over multiple OSTs. The stripe count can be set at the filesystem, directory, or file level; see the Lustre Striping Guide for more information.
  Backup: NO
/scratch
  Local scratch disk on each worker node, 289 GB in size.
  Suitable for any data created by a batch job; offers the best I/O performance.
  Any precious data must be copied off this area before the job ends, since it is automatically purged when the job finishes.
  Backup: NO
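The stage-out pattern for /scratch can be sketched as below. This is a minimal illustration, not a site-provided script: the real paths would be under /scratch and a project area such as /work1, but mktemp directories stand in here so the sketch runs anywhere, and the file names are hypothetical.

```shell
# Sketch of the scratch stage-out pattern (all paths/names hypothetical).
SCRATCH=$(mktemp -d)    # stand-in for a job directory under /scratch
DEST=$(mktemp -d)       # stand-in for a project area, e.g. under /work1

# ... the job runs here, writing its output into $SCRATCH ...
echo "job output" > "$SCRATCH/results.dat"

# Copy precious output off scratch BEFORE the job ends;
# /scratch is purged automatically when the job finishes.
cp "$SCRATCH/results.dat" "$DEST/"
rm -rf "$SCRATCH"       # tidy up explicitly
```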
/pnfs
  Enstore tape storage, visible on the cluster submit host only.
  Ideal for permanent storage of parameter files and results.
  Files must be copied with the special copy command dccp, e.g. dccp <source> <destination>.
Globus
  A Globus endpoint named fnal#wilson can be used to transfer data in and out of WC-IC. Please refer to the documentation at globus.org on setting up Globus Connect Personal if needed.

Other Documentation