# 3.4. Running JULES

The user interface of JULES consists of several files with the extension .nml containing Fortran namelists. These files and the namelist members are documented in more detail in The JULES namelist files. These namelists are grouped together in a single directory. That directory is referred to as the namelist directory for a JULES run.

Once a JULES executable is compiled and the namelists are set up, JULES can be run in one of two ways:

1. Run the JULES executable in the namelist directory with no arguments:

cd /path/to/namelist/dir
/path/to/jules.exe

2. Run the JULES executable with the namelist directory as an argument:

/path/to/jules.exe  /path/to/namelist/dir


Warning

Any relative paths given to JULES via the namelists (e.g. file in JULES_FRAC) will be interpreted relative to the current working directory.

This means that if the user plans to use the second method to run JULES (e.g. in a batch environment), it is advisable to use fully-qualified path names for all files specified in the namelists.

To allow runs to be portable across different machines, it is common to specify data files relative to the namelist directory (e.g. in the point_loobos_* examples supplied with JULES). In this case, JULES must be run using the first method to allow the relative paths to be resolved correctly.
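The effect of the working directory on relative paths can be illustrated with a small sketch (the directory and file names here are hypothetical, standing in for a namelist directory and a data file it references):

```shell
# Set up a stand-in namelist directory containing a data file that the
# namelists would reference by the relative path "./loobos_frac.dat".
mkdir -p /tmp/jules_demo/namelists
printf '0.75 0.25\n' > /tmp/jules_demo/namelists/loobos_frac.dat

# Method 1: change into the namelist directory before running.
# The relative path resolves against the current working directory.
cd /tmp/jules_demo/namelists
test -f ./loobos_frac.dat && echo "found: relative path resolves"

# Method 2: run from elsewhere (as a batch system might).
# The same relative path no longer resolves.
cd /tmp
test -f ./loobos_frac.dat || echo "not found: relative path does not resolve"
```

This is why namelists that use relative data paths require the first method, while fully-qualified paths work with either.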

## 3.4.1. Running the Loobos example from a fresh download of JULES

1. Move into the JULES root directory (the directory containing includes, src etc.):

$ cd /jules/root/dir

2. Build JULES:

$ fcm make -f etc/fcm-make/make.cfg

3. Move into the example directory:

$ cd examples/point_loobos/

4. Run the JULES executable:

$ ../../build/bin/jules.exe


## 3.4.2. Running JULES with OpenMP

If JULES is compiled with OpenMP, then it must be told how many OpenMP threads to use. This is done using the environment variable OMP_NUM_THREADS:

$ export OMP_NUM_THREADS=4  # Use 4 threads for OpenMP parallel regions
$ /path/to/jules.exe


## 3.4.3. Running JULES with MPI

When running JULES using MPI, JULES attempts to find a suitable decomposition of the grid depending on how many MPI tasks are made available to it. Each MPI task can then be thought of as its own independent version of JULES, with each task being responsible for a portion of the grid. Each task reads its portion of the input file(s), performs calculations on those points and outputs its portion of the output file(s). Tasks communicate only in order to read and write dump files. This ensures that dump files are consistent regardless of decomposition: a dump from any run (with or without MPI, or with a different number of MPI tasks) can be used to (re-)start any other run and produce identical results, provided the overall model grids are the same.
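The idea of each task owning a portion of the grid can be sketched as follows. This is purely illustrative (it is not JULES's actual decomposition algorithm, and the point and task counts are made up): a grid of points is split as evenly as possible across the available tasks.

```shell
# Illustrative only: divide NPOINTS grid points among NTASKS MPI tasks,
# handing the remainder out one point at a time to the first tasks.
NPOINTS=10
NTASKS=4
for task in $(seq 0 $((NTASKS - 1))); do
  base=$((NPOINTS / NTASKS))
  extra=$(( task < NPOINTS % NTASKS ? 1 : 0 ))
  echo "task $task handles $((base + extra)) points"
done
```

However the split is made, every point is owned by exactly one task, which is why each task can read, compute and write its portion independently.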

None of the namelists or namelist members are parallel-specific: the same JULES namelists can be used to run JULES with or without MPI, and the final results will be identical.

If JULES is compiled with MPI, then it must be run using commands from your MPI distribution (usually called mpiexec and/or mpirun):

$ mpirun -n 4 /path/to/jules.exe  # Run JULES using 4 MPI tasks


Detailed discussion of mpiexec/mpirun is beyond the scope of this document - please refer to the documentation for your chosen MPI distribution for the available options and features.