== Introduction ==

Message Passing Interface (MPI) is an API for parallelizing programs across multiple nodes, and it has been around since 1994 [1]. MPI can also be used for parallelization on SMP machines and is considered very efficient at it too (close to 100% scaling on parallelizable code, compared to the ~80% commonly obtained with threads due to suboptimal memory allocation on NUMA machines). Before MPI, nearly every manufacturer of supercomputers had its own programming language for writing programs; MPI made porting software easy.

There are many MPI implementations available, such as [http://www.lam-mpi.org/ LAM-MPI] (in Fedora, obsoleted by Open MPI), [http://www.open-mpi.org/ Open MPI] (the default MPI compiler in Fedora and the MPI compiler used in RHEL), [http://www.mcs.anl.gov/research/projects/mpi/mpich1/ MPICH] (not yet in Fedora), [http://www.mcs.anl.gov/research/projects/mpich2/ MPICH2] (in Fedora), and [http://mvapich.cse.ohio-state.edu/ MVAPICH1 and MVAPICH2] (in RHEL but not yet in Fedora).

As some MPI libraries work better on some hardware than others, and some software works best with a particular MPI library, the selection of the library to use must be made at the user level, on a session-specific basis. Also, people doing high performance computing may want to use more efficient compilers than the default one in Fedora (gcc), so it must be possible to have several versions of the MPI compiler, each built with a different compiler, installed at the same time. This must be taken into account when writing spec files.

== Packaging of MPI compilers ==

It <b>MUST</b> be possible to build the MPI compiler RPMs with other compilers as well, and versions compiled with different compilers <b>MUST</b> be installable and usable simultaneously (e.g. in addition to a version compiled with {gcc, g++, gfortran}, a version compiled with {gcc34, g++34, g77} must be installable and usable at the same time, as gfortran does not fully support Fortran 77). To make this possible, the files of MPI compilers <b>MUST</b> be installed in the following directories:

{|
! File type !! Placement
|-
|Binaries||<code>%{_libdir}/%{name}%{?_cc_name_suffix}/bin</code>
|-
|Libraries||<code>%{_libdir}/%{name}%{?_cc_name_suffix}/lib</code>
|-
|[[PackagingDrafts/Fortran|Fortran modules]]||<code>%{_fmoddir}/%{name}%{?_cc_name_suffix}/</code>
|-
|Architecture specific [[Packaging/Python|Python modules]]||<code>%{python_sitearch}/%{name}%{?_cc_name_suffix}/</code>
|-
|Config files||<code>%{_sysconfdir}/%{name}-%{_arch}%{?_cc_name_suffix}/</code>
|}

Here <code>%{?_cc_name_suffix}</code> is null when compiled with the normal {gcc, g++, gfortran} combination, but would be e.g. <code>-gcc34</code> for {gcc34, g++34, g77}.
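For illustration (a hypothetical example, not a real Fedora package): a version of <code>openmpi</code> built with gcc34 on x86_64 would use roughly these directories:
<pre>
# %{name} = openmpi, %{?_cc_name_suffix} = -gcc34, %{_arch} = x86_64
/usr/lib64/openmpi-gcc34/bin      # binaries
/usr/lib64/openmpi-gcc34/lib      # libraries
/etc/openmpi-x86_64-gcc34/        # config files
</pre>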


As include files and manual pages are bound to overlap between different MPI implementations, they <b>MUST</b> also be placed outside the normal directories. Since some man pages or include files (either those of the MPI compiler itself or of some MPI software installed in the compiler's directory) may be architecture specific (e.g. a definition on a 32-bit arch may differ from that on a 64-bit arch), the directories that <b>MUST</b> be used are as follows:

{|
! File type !! Placement
|-
|Man pages||<code>%{_mandir}/%{name}-%{_arch}%{?_cc_name_suffix}/</code>
|-
|Include files||<code>%{_includedir}/%{name}-%{_arch}%{?_cc_name_suffix}/</code>
|}


Architecture and compiler (<code>%{?_cc_name_suffix}</code>) independent parts (except headers, which go into <code>-devel</code>) <b>MUST</b> be placed in a <code>-common</code> subpackage that is <code>BuildArch: noarch</code> on >= Fedora 11.
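A minimal sketch of such a subpackage declaration (the summary text is illustrative):
<pre>
%package common
Summary: Architecture and compiler independent files for %{name}
# noarch subpackages are possible on Fedora 11 and later
%if 0%{?fedora} >= 11
BuildArch: noarch
%endif
</pre>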


The MPI compiler's spec file <b>MUST</b> support the use of the following variables for compiling with other compilers:

<pre>
# We only compile with gcc, but other people may want other compilers.
# Set the compiler here.
%global opt_cc gcc
# Optional CFLAGS to use with the specific compiler...gcc doesn't need any,
# so uncomment and define to use
#global opt_cflags
%global opt_cxx g++
#global opt_cxxflags
%global opt_f77 gfortran
#global opt_fflags
%global opt_fc gfortran
#global opt_fcflags

# Optional name suffix to use...we leave it off when compiling with gcc, but
# for other compiled versions to install side by side, it will need a
# suffix in order to keep the names from conflicting.
#global cc_name_suffix -gcc
</pre>
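These macros would then be honoured in <code>%build</code>, and a rebuild with another compiler only needs to redefine them; a hypothetical sketch:
<pre>
# In %build, pick up the compiler selection macros defined above
export CC=%{opt_cc}
export CXX=%{opt_cxx}
export F77=%{opt_f77}
export FC=%{opt_fc}
export CFLAGS="%{optflags} %{?opt_cflags}"

# Rebuilding with a different compiler could then be done with e.g.:
#   rpmbuild -bb --define 'opt_cc gcc34' --define 'opt_f77 g77' \
#            --define 'cc_name_suffix -gcc34' openmpi.spec
</pre>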

The runtime of MPI compilers (<code>mpirun</code>, the libraries, the manuals etc.) <b>MUST</b> be packaged into <code>%{name}</code>, and the development headers and libraries into <code>%{name}-devel</code>.

As the compiler is installed outside <code>PATH</code>, one needs to load the relevant variables before being able to use the compiler or run MPI programs. This is done using [[Packaging/EnvironmentModules|environment modules]].

The module file <b>MUST</b> be installed under <code>%{_sysconfdir}/modulefiles/mpi/</code>. This allows a user with only one MPI implementation installed to load the module with:

<pre>
module load mpi
</pre>

The module file <b>MUST</b> have the line:

<pre>
conflict mpi
</pre>

to prevent concurrent loading of multiple mpi modules.
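With the conflict line in place, loading a second MPI module while one is already loaded is refused; an illustrative session (module names are hypothetical):
<pre>
$ module load mpi/openmpi-x86_64
$ module load mpi/mpich2-x86_64    # refused: conflicts with the loaded mpi module
</pre>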


The module file <b>MUST</b> prepend the MPI bindir <code>%{_libdir}/%{name}%{?_cc_name_suffix}/bin</code> to the user's PATH, set LD_LIBRARY_PATH to <code>%{_libdir}/%{name}%{?_cc_name_suffix}/lib</code> and PYTHONPATH to <code>%{python_sitearch}/%{name}%{?_cc_name_suffix}/</code> (matching the file placement tables above). The module file <b>MUST</b> also set some helper variables (primarily for use in spec files):

{|
! Variable !! Value !! Explanation
|-
|<code>MPI_BIN</code>||<code>%{_libdir}/%{name}%{?_cc_name_suffix}/bin</code>||Binaries compiled against the MPI stack
|-
|<code>MPI_SYSCONFIG</code>||<code>%{_sysconfdir}/%{name}-%{_arch}%{?_cc_name_suffix}/</code>||MPI stack specific configuration files
|-
|<code>MPI_FORTRAN_MOD_DIR</code>||<code>%{_fmoddir}/%{name}%{?_cc_name_suffix}/</code>||MPI stack specific Fortran module directory
|-
|<code>MPI_INCLUDE</code>||<code>%{_includedir}/%{name}-%{_arch}%{?_cc_name_suffix}/</code>||MPI stack specific headers
|-
|<code>MPI_LIB</code>||<code>%{_libdir}/%{name}%{?_cc_name_suffix}/lib</code>||Libraries compiled against the MPI stack
|-
|<code>MPI_MAN</code>||<code>%{_mandir}/%{name}-%{_arch}%{?_cc_name_suffix}/</code>||MPI stack specific man pages
|-
|<code>MPI_PYTHON_SITEARCH</code>||<code>%{python_sitearch}/%{name}%{?_cc_name_suffix}/</code>||MPI stack specific Python modules
|-
|<code>MPI_COMPILER</code>||<code>%{name}-%{_arch}%{?_cc_name_suffix}</code>||Name of the compiler package, for use in e.g. spec files
|-
|<code>MPI_SUFFIX</code>||<code>%{?_cc_name_suffix}_%{name}</code>||The suffix used for programs compiled against the MPI stack
|}

As these directories may be used by software using the MPI stack, the MPI runtime package <b>MUST</b> own all of them.
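Putting the requirements above together, a module file could look like the following minimal sketch, here for a hypothetical gcc-built <code>openmpi</code> on x86_64 (all paths are illustrative expansions of the macros in the table; a real package would generate them in the spec file):
<pre>
#%Module 1.0
# Sketch of %{_sysconfdir}/modulefiles/mpi/openmpi-x86_64 -- not the
# file shipped by any actual package
conflict        mpi
prepend-path    PATH                /usr/lib64/openmpi/bin
prepend-path    LD_LIBRARY_PATH     /usr/lib64/openmpi/lib
prepend-path    PYTHONPATH          /usr/lib64/python2.7/site-packages/openmpi
setenv          MPI_BIN             /usr/lib64/openmpi/bin
setenv          MPI_SYSCONFIG       /etc/openmpi-x86_64
setenv          MPI_FORTRAN_MOD_DIR /usr/lib64/gfortran/modules/openmpi
setenv          MPI_INCLUDE         /usr/include/openmpi-x86_64
setenv          MPI_LIB             /usr/lib64/openmpi/lib
setenv          MPI_MAN             /usr/share/man/openmpi-x86_64
setenv          MPI_PYTHON_SITEARCH /usr/lib64/python2.7/site-packages/openmpi
setenv          MPI_COMPILER        openmpi-x86_64
setenv          MPI_SUFFIX          _openmpi
</pre>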

<b>MUST:</b> By default, <b>NO</b> files are placed in <code>/etc/ld.so.conf.d</code>. If the packager wishes to provide alternatives support, it <b>MUST</b> be placed in a subpackage along with the <code>ld.so.conf.d</code> file, so that alternatives support does not need to be installed if not wished for.

<b>MUST:</b> If the maintainer wishes for the environment module to load automatically by use of a scriptlet in <code>/etc/profile.d</code> or by some other mechanism, this <b>MUST</b> be done in a subpackage.

<b>MUST:</b> The MPI compiler package <b>MUST</b> provide an RPM macro that makes loading and unloading the support easy in spec files, e.g. by placing the following in <code>/etc/rpm/macros.openmpi</code>:

<pre>
%_openmpi_load \
  . /etc/profile.d/modules.sh; \
  module load mpi/openmpi-%{_arch}; \
  export CFLAGS="$CFLAGS %{optflags}";
%_openmpi_unload \
  . /etc/profile.d/modules.sh; \
  module unload mpi/openmpi-%{_arch};
</pre>
Loading and unloading the compiler in spec files is then as easy as <code>%{_openmpi_load}</code> and <code>%{_openmpi_unload}</code>.

If the environment module sets compiler flags such as <code>CFLAGS</code> (thus overriding the ones exported in <code>%configure</code>), the RPM macro <b>MUST</b> make them include the Fedora optimization flags <code>%{optflags}</code> once again (as in the example above, in which the openmpi-%{_arch} module sets CFLAGS).

== Packaging of MPI software ==

Software that supports MPI <b>MUST</b> also be packaged in serial mode (i.e. without MPI), if upstream supports it (for instance: <code>foo</code>).

If possible, the packager <b>MUST</b> package versions for each MPI compiler in Fedora (e.g. if something can only be built with <code>mpich2</code> and <code>mvapich2</code>, then <code>lam</code> and <code>openmpi</code> packages do not need to be made).

MPI implementation specific files <b>MUST</b> be installed in the directories used by the MPI compiler in question (<code>$MPI_BIN</code>, <code>$MPI_LIB</code> and so on).
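For instance, an autotools-based package might install its Open MPI build into these directories with a fragment like the following sketch (this assumes the build layout of the sample spec below, and that the build system honours <code>bindir</code>/<code>libdir</code> overrides at install time):
<pre>
# Install the Open MPI build into the directories exported by the module
%{_openmpi_load}
make -C $MPI_COMPILER install DESTDIR=%{buildroot} bindir=$MPI_BIN libdir=$MPI_LIB
%{_openmpi_unload}
</pre>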

The binaries <b>MUST</b> be suffixed with <code>$MPI_SUFFIX</code> (e.g. <code>_openmpi</code> for Open MPI, <code>_mpich2</code> for MPICH2 and <code>_lam</code> for LAM/MPI). This is for two reasons: the serial version of the program can still be run when an MPI module is loaded, and the user is always aware of which version they are running. This does not need to hurt the use of shell scripts:

<pre>
# Which MPI implementation do we use?
#module load mpi/lam-i386
#module load mpi/openmpi-i386
module load mpi/mpich2-i386

# Run preprocessor
foo -preprocess < foo.in
# Run calculation
mpirun -np 4 foo${MPI_SUFFIX}
# Run some processing
mpirun -np 4 bar${MPI_SUFFIX} -process
# Collect results
bar -collect
</pre>

The MPI enabled bits <b>MUST</b> be placed in a subpackage with the suffix denoting the MPI compiler used (for instance: <code>foo-openmpi</code> for Open MPI [the traditional MPI compiler in Fedora] or <code>foo-mpich2</code> for MPICH2). For directory ownership and to guarantee the pickup of the correct MPI runtime, the MPI subpackages <b>MUST</b> require the correct MPI compiler's runtime package.

Each MPI build of shared libraries <b>SHOULD</b> have a separate <code>-libs</code> subpackage for the libraries (e.g. <code>foo-mpich2-libs</code>). As in the case of MPI compilers, library configuration (in <code>/etc/ld.so.conf.d</code>) <b>MUST NOT</b> be made.

In case the headers are the same regardless of the compilation method and architecture (e.g. 32-bit serial, 64-bit Open MPI, MPICH2), they <b>MUST</b> be split into a separate <code>-headers</code> subpackage (e.g. <code>foo-headers</code>). Fortran modules are architecture specific and as such are placed in the (MPI implementation specific) <code>-devel</code> package (<code>foo-devel</code> for the serial version and <code>foo-openmpi-devel</code> for the Open MPI version).

Each MPI build <b>MUST</b> have a separate <code>-devel</code> subpackage (e.g. <code>foo-mpich2-devel</code>) that includes the development libraries and <code>Requires: %{name}-headers</code> if such a package exists. The goal is to be able to install and develop using e.g. <code>foo-mpi-devel</code> without needing to install e.g. mpich2 and lam or the serial version of the package.
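A minimal sketch of how these subpackages might relate for a hypothetical <code>foo</code> (only the dependency-relevant lines are shown; the naming follows the sample spec below):
<pre>
%package headers
Summary: Header files for foo
# Headers shared by the serial and all MPI builds

%package mpi
Summary: Open MPI build of foo
Requires: openmpi
Requires: foo-common = %{version}-%{release}

%package mpi-devel
Summary: Development files for the Open MPI build of foo
Requires: foo-mpi = %{version}-%{release}
Requires: foo-headers = %{version}-%{release}
</pre>
Installing <code>foo-mpi-devel</code> then pulls in the headers and the Open MPI runtime, but not mpich2, lam, or the serial package.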

Files must be shared between packages as much as possible. Compiler independent parts, such as data files in <code>%{_datadir}/%{name}</code> and man pages, <b>MUST</b> be put into a <code>-common</code> subpackage that is required by all of the binary packages (the serial package and all of the MPI packages).

=== A sample spec file ===

<pre>
# Define a macro for calling ../configure instead of ./configure
%global dconfigure %(printf %%s '%configure' | sed 's!\./configure!../configure!g')

Name: foo
Requires: %{name}-common = %{version}-%{release}

%package common

%package lam
BuildRequires: lam-devel
# Require explicitly for dir ownership and to guarantee the pickup of the right runtime
Requires: lam
Requires: %{name}-common = %{version}-%{release}

%package mpi
BuildRequires: openmpi-devel
# Require explicitly for dir ownership and to guarantee the pickup of the right runtime
Requires: openmpi
Requires: %{name}-common = %{version}-%{release}

%package mpich2
BuildRequires: mpich2-devel
# Require explicitly for dir ownership and to guarantee the pickup of the right runtime
Requires: mpich2
Requires: %{name}-common = %{version}-%{release}

%build
# Have to do off-root builds to be able to build many versions at once

# To avoid replicated code define a build macro
%define dobuild() \
mkdir $MPI_COMPILER; \
cd $MPI_COMPILER; \
%dconfigure --program-suffix=$MPI_SUFFIX ;\
make %{?_smp_mflags} ; \
cd ..

# Build serial version; export dummy values for the variables the MPI
# modules would otherwise set
export MPI_COMPILER=serial MPI_SUFFIX=
%dobuild

# Build parallel versions: set compiler variables to MPI wrappers
export CC=mpicc
export CXX=mpicxx
export FC=mpif90
export F77=mpif77

# Build LAM version
%{_lam_load}
%dobuild
%{_lam_unload}

# Build OpenMPI version
%{_openmpi_load}
%dobuild
%{_openmpi_unload}

# Build mpich2 version
%{_mpich2_load}
%dobuild
%{_mpich2_unload}

%install
# Install serial version
make -C serial install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"

# Install LAM version
%{_lam_load}
make -C $MPI_COMPILER install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"
%{_lam_unload}

# Install OpenMPI version
%{_openmpi_load}
make -C $MPI_COMPILER install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"
%{_openmpi_unload}

# Install MPICH2 version
%{_mpich2_load}
make -C $MPI_COMPILER install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"
%{_mpich2_unload}


# All the serial (normal) binaries
%files

# All files shared between the serial and different MPI versions
%files common

# All LAM linked files
%files lam

# All Open MPI linked files (the "mpi" subpackage defined above)
%files mpi

# All MPICH2 linked files
%files mpich2
</pre>