salloc is used to allocate resources for a job in real time as an interactive batch job. Typically this is used to allocate resources and spawn a shell. The shell is then used to execute srun commands to launch parallel tasks.
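For example, a minimal interactive allocation might look like the following; the node count, walltime, QOS, constraint, and account are illustrative values, not site requirements:

salloc --nodes=1 --time=00:30:00 --qos=interactive --constraint=haswell --account=m2467
# a shell now runs inside the allocation; launch parallel tasks from it with srun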
srun
srun is used to submit a job for execution or initiate job steps in real time. A job can contain multiple job steps executing sequentially or in parallel on independent or shared resources within the job's node allocation. This command is typically executed within a script which is submitted with sbatch or from an interactive prompt on a compute node obtained via salloc.
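As a sketch, a job step inside an existing allocation (from salloc, or within an sbatch script) could be launched like this; the task counts and executable name are placeholders:

srun --ntasks=8 --cpus-per-task=4 ./my_parallel_code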
How to write an sbatch script? Link
How to request an interactive node? Link
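As a rough sketch (not an official template), an sbatch script could look like the following; the account, QOS, constraint, walltime, and analysis script name are assumptions to adapt:

#!/bin/bash
#SBATCH --account=m2467          # project account (assumed; use your own)
#SBATCH --qos=regular            # QOS/queue (assumed)
#SBATCH --constraint=haswell     # node type (assumed)
#SBATCH --nodes=1
#SBATCH --time=01:00:00
#SBATCH --job-name=cmip6_analysis
#SBATCH --output=%x-%j.out       # writes <job-name>-<job-id>.out

export HDF5_USE_FILE_LOCKING=FALSE   # see the HDF5 note below
srun -n 32 python analysis.py        # placeholder analysis script

Submit it with: sbatch myjob.sh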
Sample scripts (Bharat)
Public Webpage
Path: cd /project/projectdirs/m2467/www/bharat/
Webpage: https://portal.nersc.gov/project/m2467/
If you are not able to see your files on the webpage, run the following command in the terminal: chmod 755 *
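For instance, publishing a figure could look like this; the file name is a placeholder, and the final URL assumes the portal maps the www directory as described above:

cp plot.png /project/projectdirs/m2467/www/bharat/
cd /project/projectdirs/m2467/www/bharat/ && chmod 755 *
# the figure should then appear at https://portal.nersc.gov/project/m2467/bharat/plot.png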
To avoid HDF5 file-locking errors, run the following in the terminal of the interactive node: export HDF5_USE_FILE_LOCKING=FALSE
SCRATCH
cd $SCRATCH or cd /global/cscratch1/sd/bharat
CMIP6 Data
/global/cfs/cdirs/m3522/cmip6/CMIP6
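The archive follows the standard CMIP6 directory layout (activity/institution/model/experiment/member/table/variable/grid/version); the specific model and experiment below are only examples and may not be present in this copy:

ls /global/cfs/cdirs/m3522/cmip6/CMIP6/CMIP/
ls /global/cfs/cdirs/m3522/cmip6/CMIP6/CMIP/NCAR/CESM2/historical/r1i1p1f1/Amon/tas/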
Project directory on the Community File System (CFS); use this to share data with other project members
/global/cfs/cdirs/m2467
Backups
Snapshots
Global homes and Community use a snapshot capability to provide users a seven-day history of their directories. Every directory and sub-directory in global homes contains a ".snapshots" entry.
.snapshots is invisible to ls, ls -a, find and similar commands
Contents are visible through ls -F .snapshots
Can be browsed normally after cd .snapshots
Files cannot be created, deleted or edited in snapshots
Files can only be copied out of a snapshot, as shown in the example below
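For example, restoring an accidentally deleted file might look like this; the snapshot name and file name are placeholders, so check ls -F .snapshots for the names that actually exist:

cd $HOME/myproject
ls -F .snapshots                                     # list the available snapshots
cp .snapshots/2020-06-01/analysis.py ./analysis.py   # copy the lost file back out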
JupyterHub
JupyterHub provides a multi-user hub for spawning, managing, and proxying multiple instances of single-user Jupyter notebook servers. At NERSC, you authenticate to the JupyterHub instance we manage using your NERSC credentials and one-time password. Here is a link to NERSC's JupyterHub service: https://jupyter.nersc.gov/