
Running the model

The main difference between running the standard version of SWAN in parallel and running the unstructured-mesh version of SWAN in parallel on a multi-core cluster is the need to explicitly decompose the unstructured mesh and the input files, e.g. fort.14 and fort.15, into smaller pieces, so that each piece can run on its own core of the cluster. The program adcprep performs this decomposition. The actual mesh partitioning is done by the well-known METIS package (http://glaros.dtc.umn.edu/gkhome/views/metis).


In order to break up the original input files (which describe the full domain) into smaller pieces (called subdomains), go to the directory where the input files are located and execute adcprep. You must specify the number of cores that you plan to use for the simulation. The program then decomposes your input files into that number of subdomains and copies each of them into the corresponding local PE sub-directory. Finally, it copies the SWAN command file into each local PE sub-directory. If you later decide to run on more (or fewer) cores, this step must be repeated.
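

As an illustration, after running adcprep for four cores the run directory might look as follows (the PE sub-directory names follow the ADCIRC convention; the exact contents depend on your input files, so this is only a sketch):

   fort.14      full-domain mesh
   fort.15      full-domain model parameters
   swaninit     SWAN initialisation file
   INPUT        SWAN command file
   PE0000/      subdomain 0: local fort.14, fort.15 and command file
   PE0001/      subdomain 1
   PE0002/      subdomain 2
   PE0003/      subdomain 3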


More specifically, run adcprep and indicate the number of processors. First, partition the mesh using METIS by selecting

1. partmesh
Next, rerun adcprep and continue with full pre-processing by selecting
2. prepall
or
3. prepspec
and specify the necessary ADCIRC files (fort.14 and fort.15; the file fort.19 may be skipped). Be sure that the initialisation file swaninit has been created before running adcprep. You may adapt this initialisation file by replacing the default command-file name INPUT with the name of your own SWAN command file. Otherwise, you may copy your command file to INPUT.
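
A complete pre-processing session thus consists of two passes of adcprep. The dialogue below is paraphrased and may differ slightly between versions; four cores are assumed here:

   > adcprep
     number of processors?  4
     select: 1. partmesh        (partition the mesh with METIS)

   > adcprep
     number of processors?  4
     select: 2. prepall         (decompose fort.14, fort.15, ...)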


After the mesh and input files have been prepared with adcprep, the actual SWAN computation is carried out by swan.exe. This program performs the calculation on each subdomain and uses MPI for the communication between the subdomains. The command needed to actually start the SWAN job depends strongly on the cluster software and queueing system. In any case, the new executable is run in much the same way as the parallel version of SWAN; see Chapter 5.
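

For example, with a typical MPI installation the job may be started with mpirun or mpiexec, where the number of processes must equal the number of subdomains chosen in adcprep:

   > mpirun -np 4 swan.exe

On clusters with a queueing system (e.g. SLURM or PBS) this line would instead appear inside a job script submitted to the queue.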


After the simulation has finished, the output data are merged automatically. Hotfiles, however, are written per subdomain in the local PE sub-directories, so in principle a restart must employ the same number of processors. Alternatively, a single hotfile can be created from the existing local PE hotfiles using the program unhcat.exe, available as of SWAN version 40.91. This executable is generated from the Fortran program HottifySWAN.ftn90. To concatenate the multiple hotfiles into a single hotfile (globalization), simply execute unhcat.exe. With this program it is also possible to create a set of local PE hotfiles from an existing global hotfile (localization) for a different number of processors.
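

For instance, to merge the local PE hotfiles into one global hotfile, run unhcat.exe in the run directory and answer its prompts. The program is interactive, so the dialogue below is only indicative:

   > unhcat.exe
     choose: globalization  (concatenate local PE hotfiles into a single hotfile)
         or: localization   (split a global hotfile over a given number of processors)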

