
I recommend that you start with the exact same structure for running the models that I use. You can customize things as you begin to understand how it all works. This is all set up for work on the union square cluster. Everything is about the same on bowery, but I have not yet compiled all the cores there.

0) An overview of the file system: where to put code, run the model, and store the output.

Wherever you see “nyuID”, substitute your own nyuID, i.e. epg2 for me.

/home/nyuID This is a place for codes, scripts, etc., but not data. This is backed up, but small. We put the model code here!

/scratch/nyuID (or) /scratch.tmp/nyuID This is a place to run the models and analyze output, but is not backed up.

/archive/nyuID This is a place to store data - it’s backed up - but it cannot be accessed from the compute nodes.
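As a quick sanity check, a sketch like the following (the NetID epg2 is just an example; substitute your own) reports which of the three areas are visible from the node you are logged into:

```shell
# Replace epg2 with your own nyuID. Expect /archive to show as
# "not visible" if you run this from a compute node.
nyuID=epg2
for d in /home/$nyuID /scratch/$nyuID /archive/$nyuID; do
    if [ -d "$d" ]; then
        echo "$d: found"
    else
        echo "$d: not visible from this node"
    fi
done
```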

1) Get the codes, scripts, etc. Here we copy the key files from my home directory.

Start in your home directory: > cd

Create a place to put all the climate model related stuff: > mkdir models

Get the code from my home directory. I’ve already compiled it and produced executables.

> mkdir models/code/
> cp -r /home/epg2/models/code/memphis models/code/

Get the scripts to run the model:

> mkdir models/jobs
> cp /home/epg2/models/jobs/template.iterate.nyu models/jobs/
> cp /home/epg2/models/jobs/create_runscript.csh models/jobs/
> cp /home/epg2/models/jobs/am2_runscript_nyu models/jobs/

Get the parameters for four basic runs (three dynamical cores + AM2):

> mkdir models/run_parameters
> cp -r /home/epg2/models/run_parameters/bgrid_n45l20e_hs models/run_parameters/
> cp -r /home/epg2/models/run_parameters/n45l24_hs models/run_parameters/
> cp -r /home/epg2/models/run_parameters/t42l20e_hs models/run_parameters/
> cp -r /home/epg2/models/run_parameters/am2_default models/run_parameters/

Get some scripts to interpret the output. These only work with the dynamical cores. They may not be so useful to you without further instruction. They allow you to transfer and rename files, and then to preprocess the results a bit, making them easier to work with in matlab.

> mkdir models/scripts
> cp /home/epg2/models/scripts/transfer-data.sh models/scripts
> cp /home/epg2/models/scripts/interpolate-concatenate.sh models/scripts
> cp /home/epg2/models/scripts/interpolate-zonal-average-concatenate.sh models/scripts

Lastly, get some basic scripts to use matlab:

> mkdir matlab
> cp /home/epg2/matlab/ncget.m matlab

2) Set up your environment on the hpc.

You’ll need to set up a few things to compile and run the core on the hpc. I load the necessary paths by default by putting them in my start-up file. I use the bash shell, which I think is the default. If you’ve already customized your .bashrc file, then you’ll just want to copy the module load commands from my sample.bashrc. Otherwise, start with the sample .bashrc file, and then activate the changes by sourcing it:

> cp /home/epg2/epg_library/sample.bashrc .bashrc
> source .bashrc

These commands set up your paths so that the system knows that you’ll be using fortran, parallel processing, matlab, etc. NOTE that the last command, u=rwx,g=rx,o=rx, makes it so that files you create can be viewed by other people on the hpc. In general you should not consider anything on a public system like this to be private, but this does allow anyone to look at your files. This can be useful when you want to share data or want me to help you; without it, I can’t read any of your files! That said, if this option makes you feel uncomfortable, please delete this line!!!
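As a sketch of what that permissions line does (assuming it is a umask command of the form below), you can watch it take effect on a newly created file:

```shell
# Assumed form of the permissions line in sample.bashrc:
umask u=rwx,g=rx,o=rx    # you keep full access; group/others get read (and execute on dirs)
touch demo_file
ls -l demo_file          # new files come out -rw-r--r--, i.e. world-readable
rm demo_file
```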

I’ve also included some useful aliases in the sample.bashrc file. I’ll refer to them later. (An alias is a shortcut for typing long commands.)

You may need to edit your model runscript template file. If your work directory is /scratch.tmp/nyuID instead of /scratch/nyuID, you must edit the file

/home/nyuID/models/jobs/template.iterate.nyu

Each time you see “scratch” in this file, replace it with “scratch.tmp”. There should be a total of three instances you need to change.
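As a sketch, the substitution can be done with sed. The demo below runs on a throwaway file with invented contents (the real template is ~/models/jobs/template.iterate.nyu and will look different; make a backup copy of it before editing):

```shell
# Demo of the scratch -> scratch.tmp substitution on a fake file.
printf 'workdir=/scratch/nyuID\ndata=/scratch/nyuID/out\nrun=/scratch/nyuID/run\n' > demo.tmpl
sed -i 's|/scratch/|/scratch.tmp/|g' demo.tmpl
grep -c 'scratch\.tmp' demo.tmpl    # prints 3, one per changed line
rm demo.tmpl
```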

From now on, anytime I say “scratch” and you have the “scratch.tmp” directory, always use scratch.tmp instead of scratch.

Next, create space to run your model, remembering to use your nyuID. All the output will be put here when you run the model. Also create a place to store the output text files when the model runs. Normally you don’t need to look at these output files, but they can be useful when the model crashes.

> mkdir /scratch/nyuID/model_output
> mkdir /scratch/nyuID/job.output

It wouldn’t hurt to create a place to put the model output on the archive, too. You can move data here after running the model.

> mkdir /archive/nyuID/model_output

Okay, now you should be ready to start running the code. Go to the next file, “running_the_model_interactively.pdf”