Parallel WRF (Weather Research and Forecasting) with Intel compilers on a 64-bit Linux System
The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. It features multiple dynamical cores, a 3-dimensional variational (3DVAR) data assimilation system, and a software architecture allowing for computational parallelism and system extensibility. WRF is suitable for a broad spectrum of applications across scales ranging from meters to thousands of kilometers.
So speaketh the about page. Now, let's get it installed and tested. If you are planning on running real cases, you will need: WPS + the WRF ARW model + post-processing. In the tradition of nested acronyms, WPS is the "WRF Preprocessing System".
Register as a new user and download from:
http://www.mmm.ucar.edu/wrf/users/download/wrf-regist.php
This documentation only illustrates the installation of the parallel version, because WRF deals with problems of sufficient complexity that a serial install is not particularly useful. Further, it only covers a parallel installation with the Intel compilers, because that is the only combination we have successfully installed.
(This is a highly abbreviated version that does not detail the blood, sweat and tears that were spilt installing this under other compilers, etc.)
cd /usr/local/src/WRF
wget http://www.mmm.ucar.edu/wrf/src/WRFV3.2.1.TAR.gz
wget http://www.mmm.ucar.edu/wrf/src/WPSV3.2.1.TAR.gz
tar xvf WRFV3.2.1.TAR.gz
tar xvf WPSV3.2.1.TAR.gz
The tarballs unpack into WRFV3 and WPS; rename the directories so the version and build type are obvious.
mv WRFV3 WRF3.2.1-parallel
mv WPS WPS3.2.1-parallel
Building WPS requires that WRFV3 is already built, so start with WRF. You can more or less follow what is written in the user guide. Begin by loading the following environment modules, or (if you're insane) setting them up by hand.
module load netcdf/4.0-intel
module load perl/5.10.1
module load pgi/10.9
module load openmpi-intel/1.5.1
export NETCDF=/usr/local/netcdf/4.0-intel
# or, if the netcdf module exports a location variable, use that instead, e.g. export NETCDF=$NETCDF4_DIR
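Before configuring, it is worth a quick sanity check that the toolchain and NetCDF paths are what the build expects (a sketch; the paths and module names are the ones used above and will differ on other sites):
which ifort icc mpif90 mpicc    # all four should resolve to the Intel and Open MPI module paths
echo $NETCDF                    # should print /usr/local/netcdf/4.0-intel
ls $NETCDF/include/netcdf.inc   # the Fortran include file WRF will use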
Change into the WRF directory and configure.
cd wrf3.2.1-parallel
./configure
In our case we want to select option 11, "Linux x86_64 i486 i586 i686, ifort compiler with icc (dmpar)": 64-bit Linux with the Intel Fortran and Intel C compilers and distributed-memory parallelism.
Check the generated configuration file (configure.wrf) to ensure that the paths have been set correctly. In particular, you may need to edit configure.wrf and change /usr/local/netcdf to /usr/local/netcdf/4.0-intel on line 149, and change:
LIB_EXTERNAL = \
$(WRF_SRC_ROOT_DIR)/external/io_netcdf/libwrfio_nf.a -L/usr/local/netcdf/lib -lnetcdf
to
LIB_EXTERNAL = \
$(WRF_SRC_ROOT_DIR)/external/io_netcdf/libwrfio_nf.a -L/usr/local/netcdf/4.0-intel/lib -lnetcdf
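If you prefer not to edit by hand, something like the following sed substitution will make both changes (a sketch only; the paths are site-specific, so check the result with grep afterwards):
sed -i 's|/usr/local/netcdf|/usr/local/netcdf/4.0-intel|g' configure.wrf
grep -n "netcdf" configure.wrf   # confirm only the intended paths were changed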
Also, you will find extensive suggestions littered all over the 'net to change lines 99 and 100 as follows:
DM_FC = mpif90 # -f90=$(SFC)
DM_CC = mpicc # -cc=$(SCC)
Do not do this with an Intel compilation.
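In other words, leave those lines as configure generated them, so that the MPI wrappers call the Intel compilers. They should look something like the following (a sketch of the Intel dmpar stanza, not an exact copy of configure.wrf):
DM_FC = mpif90 -f90=$(SFC)
DM_CC = mpicc -cc=$(SCC)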
Compile em_real as follows. Send the output to compile.log so you can check it for errors. This will take "a while".
./compile em_real >& compile.log
Check for success with ls main/*.exe. You should see the following:
main/ndown.exe // Used for one-way nesting
main/nup.exe // Upscaling - used with WRF-Var
main/real.exe // WRF initialization for real data cases
main/wrf.exe // WRF model integration
If you do not, check your lengthy compile.log file for where the error occurred. Once you've discovered and fixed it, run ./clean -a and start again.
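A quick way to find where things went wrong in that lengthy log (a sketch; the build carries on past individual failures, so search rather than read from the top):
grep -i -B 2 "error" compile.log | less
grep -i "undefined reference" compile.log   # a common symptom of a wrong library path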
Now move into the WPS directory you created (cd ../WPS3.2.1-parallel) and check that the NETCDF environment variables are still set (env | grep -i NETCDF). Run ./configure and select option 3, "PC Linux x86_64, Intel compiler, DM parallel (NO GRIB2)". You will need to modify configure.wps; in particular, check the locations of WRF and NetCDF. Then compile in the same fashion as WRF, with the same remedy if the compile fails (./compile >& compile.log). Again, this will take "a while". If successful, it will create the following symbolic links, with the following uses:
geogrid.exe -> geogrid/src/geogrid.exe // Generate static data
metgrid.exe -> metgrid/src/metgrid.exe // Generate input data for WRF
ungrib.exe -> ungrib/src/ungrib.exe // Unpack GRIB data
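A quick check that all three links were created (if ungrib.exe is missing, GRIB2 library problems are the usual suspect, which is why we chose the NO GRIB2 option above):
ls -l geogrid.exe metgrid.exe ungrib.exe   # each should be a symlink into its src/ directory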
You will also note the following utilities in the util/ directory:
avg_tsfc.exe //Computes daily mean surface temperature from intermediate files. Recommended for use with the 5-layer soil model (sf_surface_physics = 1) in WRF
g1print.exe //List the contents of a GRIB1 file
g2print.exe //List the contents of a GRIB2 file
mod_levs.exe //Remove superfluous levels from 3-d fields in intermediate files
plotfmt.exe //Plot intermediate files (dependent on NCAR Graphics - if you don't have these libraries, plotfmt.exe will not compile correctly)
rd_intermediate.exe //Read intermediate files
Now for some actual testing. In WPS, geogrid.exe creates static terrestrial data, ungrib.exe unpacks GRIB meteorological data and packs it into an intermediate file format, and metgrid.exe interpolates the meteorological data horizontally onto your model domain; the output from metgrid.exe is then used as input to WRF.
Who ever said that weather forecasting was easy?
Download the data for geogrid.exe. You'll want the "big" test data (geog_v3.1.tar.gz). If multiple users need to run WPS, you may want to place a single copy in a location everyone can access.
wget http://www.mmm.ucar.edu/wrf/src/wps_files/geog_v3.1.tar.gz
tar xvf geog_v3.1.tar.gz
Modify namelist.wps to ensure the correct path to the geog data (e.g., geog_data_path = 'geog'). Then run ./geogrid.exe. The output is in the format geo_em.dxx.nc, i.e., one file for each domain. If all goes well, you'll eventually receive the happy completion message:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Successful completion of geogrid. !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
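You can also sanity-check the geogrid output with ncdump from the NetCDF module loaded earlier (a quick look, not a required step):
ncdump -h geo_em.d01.nc | head -20   # the west_east/south_north dimensions correspond to e_we/e_sn in namelist.wps (unstaggered dimensions are one less)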
Next, test UNGRIB. The purpose of UNGRIB is to unpack GRIB (GRIB1 and GRIB2) meteorological data and pack it into an intermediate file format. This requires several steps.
- Download the data and place it in a unique directory
- Familiarise yourself with the data
- Link (with the UNIX command ln) the correct Vtable
- Link (with the supplied script link_grib.csh) the input GRIB data
- Edit namelist.wps. You only need to pay attention to the following parameters: start_date, end_date, interval_seconds, prefix
- Run ungrib.exe (the output will be intermediate files, one for each time)
- Familiarise yourself with the intermediate files
Here's a concrete example.
1. Download data and place in directory JAN00
wget http://www.mmm.ucar.edu/wrf_tmp/WRF_OnLineTutorial/SOURCE_DATA/JAN00.tar.gz
tar xvf JAN00.tar.gz
2. Examine the GRIB files, e.g., ./util/g1print.exe JAN00/2000012412.AWIPSF
3. ln -sf ungrib/Variable_Tables/Vtable.AWIP Vtable
4. ./link_grib.csh JAN00/2000012
5. Edit namelist.wps and add the following:
&share
start_date = '2000-01-24_12:00:00',
end_date = '2000-01-25_12:00:00',
interval_seconds = 21600,
/
&ungrib
out_format = 'WPS',
prefix = 'FILE',
/
Note the WRF instructions are unclear about this editing. Thank you to 'qiugq06' of Meteorological Numerical Model Union of China (MNMUC).
6. Run ./ungrib.exe >& ungrib_data.log and check the log file for the concluding message ! Successful completion of ungrib. !
7. Examine the intermediate files, e.g., ./util/rd_intermediate.exe FILE:2000-01-24_12. You should see something like:
SUCCESSFUL COMPLETION OF PROGRAM RD_INTERMEDIATE
FORTRAN STOP
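If rd_intermediate.exe or ungrib.exe complains about missing files, check that link_grib.csh actually created the GRIBFILE links and that ungrib produced one intermediate file per time (file names below assume the prefix and dates used in this example):
ls -l GRIBFILE.*       # symlinks created by link_grib.csh (GRIBFILE.AAA, GRIBFILE.AAB, ...)
ls -l FILE:2000-01-*   # intermediate files, one per 6-hourly time between start_date and end_date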
Finally, for METGRID. Input to METGRID is the geo_em.dxx.nc output from GEOGRID, plus the intermediate files from UNGRIB. The steps are:
1. Edit namelist.wps
2. Run metgrid.exe
Output from this run will be:
- met_em.d01.YYYY-MM-DD_HH:00:00.nc - one file per time, and
- met_em.dxx.YYYY-MM-DD_HH:00:00.nc - one file per nest, for the initial time only (met_em files for other times can be created, but are only needed for special FDDA runs).
When editing namelist.wps, keep the &share dates from the ungrib step. The documentation says otherwise; don't follow it, follow me:
The entire file should look like:
&share
wrf_core = 'ARW',
max_dom = 1,
start_date = '2000-01-24_12:00:00',
end_date = '2000-01-25_12:00:00',
interval_seconds = 21600,
io_form_geogrid = 2,
/
&geogrid
parent_id = 1, 1,
parent_grid_ratio = 1, 3,
i_parent_start = 1, 31,
j_parent_start = 1, 17,
e_we = 74, 112,
e_sn = 61, 97,
geog_data_res = '10m','2m',
dx = 30000,
dy = 30000,
map_proj = 'lambert',
ref_lat = 34.83,
ref_lon = -81.03,
truelat1 = 30.0,
truelat2 = 60.0,
stand_lon = -98.0,
geog_data_path = 'geog'
/
&ungrib
out_format = 'WPS'
prefix = 'FILE',
/
&metgrid
fg_name = 'FILE'
io_form_metgrid = 2,
/
Run the program and look in the log for the successful completion notice.
./metgrid.exe >& metgrid_data.log
! Successful completion of metgrid. !
Check it with ncdump -h met_em.d01.2000-01-24_12:00:00.nc.
Congratulations, WPS is complete. Now for WRF. Change to the em_real directory, symbolically link the met_em* files, and then run real.exe (the default namelist.input in em_real is already set up for this January 2000 case):
cd /usr/local/src/WRF/WRF3.2.1-parallel/test/em_real
ln -sf ../../../WPS3.2.1-parallel/met_em* .
./real.exe
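Since this is a dmpar build, real.exe is an MPI program; if invoking it directly fails on your system, launch it through the MPI runtime instead (a sketch, assuming the openmpi-intel module is still loaded):
mpirun -np 2 ./real.exe   # writes one rsl.out.NNNN / rsl.error.NNNN pair per MPI rank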
Check the rsl error and output files.
cat rsl.out.0000
d01 2000-01-25_12:00:00 real_em: SUCCESS COMPLETE REAL_EM INIT
FORTRAN STOP
cat rsl.error.0000
d01 2000-01-25_12:00:00 Timing for processing 0 s.
d01 2000-01-25_12:00:00 Timing for output 0 s.
d01 2000-01-25_12:00:00 Timing for loop # 5 = 0 s.
d01 2000-01-25_12:00:00 real_em: SUCCESS COMPLETE REAL_EM INIT
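If you ran with more than one MPI rank, each rank writes its own rsl pair; a quick check that the run finished cleanly everywhere (at minimum, rank 0 should report the message shown above):
grep "SUCCESS COMPLETE REAL_EM INIT" rsl.out.*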
It's done! It works! Now to create something for the users and test it with qsub.
Because there are no executables independent of the source tree, copying the WPS and WRF folders to the usual install directory is, regrettably, acceptable in the short run. There will be a better way to do this.
mkdir -p /usr/local/wrf/3.2.1-parallel
cd /usr/local/src/WRF
cp -r WRF3.2.1-parallel /usr/local/wrf/3.2.1-parallel/WRF
cp -r WPS3.2.1-parallel /usr/local/wrf/3.2.1-parallel/WPS
Make an environment module file for WRF:
cd /usr/local/Modules/modulefiles
mkdir wrf
cd wrf
ln -s .base 3.2.1-parallel
The .base file will consist of the following:
#%Module1.0#####################################################################
##
## $name modulefile
##
set ver [lrange [split [ module-info name ] / ] 1 1 ]
set name [lrange [split [ module-info name ] / ] 0 0 ]
set loading [module-info mode load]
set desc [join [read [ open "/usr/local/Modules/modulefiles/$name/.desc" ] ] ]
proc ModulesHelp { } {
puts stderr "\tThis module sets the environment for $name v$ver"
}
if { $loading && ![ is-loaded openmpi-intel/1.5.1 ] } {
module load openmpi-intel/1.5.1
}
if { $loading && ![ is-loaded netcdf/4.0-intel ] } {
module load netcdf/4.0-intel
}
module-whatis "$desc (v$ver)"
prepend-path PATH /usr/local/$name/$ver/WRF/test/em_real
prepend-path PATH /usr/local/$name/$ver/WPS
prepend-path PATH /usr/local/$name/$ver/WRF
setenv NETCDF /usr/local/netcdf/4.0-intel
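A quick test that the module does what it should (names as set up above; the version string must match the symlink created in the modulefiles directory):
module purge
module load wrf
which real.exe wrf.exe geogrid.exe   # should resolve under /usr/local/wrf/...
echo $NETCDF                         # should print /usr/local/netcdf/4.0-intel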
Just for testing purposes, copy the met files and the namelist.input into the installed test directory:
cd /usr/local/wrf/3.2.1-parallel/WRF/test/em_real
cp /usr/local/src/WRF/WRF3.2.1-parallel/test/em_real/namelist.input .
cp /usr/local/src/WRF/WRF3.2.1-parallel/test/em_real/met_em.d01.2000-01-24_12:00:00.nc .
cp /usr/local/src/WRF/WRF3.2.1-parallel/test/em_real/met_em.d01.2000-01-24_18:00:00.nc .
cp /usr/local/src/WRF/WRF3.2.1-parallel/test/em_real/met_em.d01.2000-01-25_00:00:00.nc .
cp /usr/local/src/WRF/WRF3.2.1-parallel/test/em_real/met_em.d01.2000-01-25_06:00:00.nc .
cp /usr/local/src/WRF/WRF3.2.1-parallel/test/em_real/met_em.d01.2000-01-25_12:00:00.nc .
Assuming a directory with a namelist.input and the appropriate met_em* files, the following PBS script will produce, in parallel, rsl.out files with the expected results.
#!/bin/bash
#PBS -q sque
#PBS -N wrf01
#PBS -l walltime=00:10:00
# For parallel jobs: nodes=1:ppn=2 reserves 1 node with 2 processors, i.e., 2 CPUs.
#PBS -l nodes=1:ppn=2
module load wrf
# Changes directory to your execution directory (Leave as is)
cd $PBS_O_WORKDIR
mpiexec real.exe
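Submit it and watch the queue in the usual way (the queue name, node counts and file names here are examples for our cluster; adjust to yours):
qsub wrf.pbs        # assuming the script above was saved as wrf.pbs
qstat -u $USER
tail rsl.out.0000   # after completion, should end with SUCCESS COMPLETE REAL_EM INIT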
See this for other installs:
http://software.intel.com/en-us/articles/wrf-installation-bkm-for-linux-...