Allow users to set parallel jobs in config.yaml (#3812)

* Allow users to set parallel jobs in config.yaml

* Undo change from endash to emdash

* Remove parallel config, rename jobs to build_jobs
This commit is contained in:
Adam J. Stewart 2017-04-15 10:31:00 -05:00 committed by Todd Gamblin
parent 62fb1ad990
commit bd1beedaf5
6 changed files with 36 additions and 6 deletions


@@ -66,3 +66,9 @@ config:
   # If set to true, `spack install` and friends will NOT clean
   # potentially harmful variables from the build environment. Use wisely.
   dirty: false
+
+  # The default number of jobs to use when running `make` in parallel.
+  # If set to 4, for example, `spack install` will run `make -j4`.
+  # If not set, all available cores are used by default.
+  # build_jobs: 4
+


@@ -99,8 +99,8 @@ See :ref:`modules` for details.
 ``build_stage``
 --------------------
 
-Spack is designed to run out of a user home directories, and on many
-systems the home directory a (slow) network filesystem. On most systems,
+Spack is designed to run out of a user home directory, and on many
+systems the home directory is a (slow) network filesystem. On most systems,
 building in a temporary filesystem results in faster builds than building
 in the home directory. Usually, there is also more space available in
 the temporary location than in the home directory. So, Spack tries to
@@ -180,7 +180,24 @@ the way packages build. This includes ``LD_LIBRARY_PATH``, ``CPATH``,
 ``LIBRARY_PATH``, ``DYLD_LIBRARY_PATH``, and others.
 
 By default, builds are ``clean``, but on some machines, compilers and
-other tools may need custom ``LD_LIBRARY_PATH`` setings to run. You can
+other tools may need custom ``LD_LIBRARY_PATH`` settings to run. You can
 set ``dirty`` to ``true`` to skip the cleaning step and make all builds
 "dirty" by default. Be aware that this will reduce the reproducibility
 of builds.
+
+--------------
+``build_jobs``
+--------------
+
+Unless overridden in a package or on the command line, Spack builds all
+packages in parallel. For a build system that uses Makefiles, this means
+running ``make -j<build_jobs>``, where ``build_jobs`` is the number of
+threads to use.
+
+The default parallelism is equal to the number of cores on your machine.
+If you work on a shared login node or have a strict ulimit, it may be
+necessary to set the default to a lower value. By setting ``build_jobs``
+to 4, for example, commands like ``spack install`` will run ``make -j4``
+instead of hogging every core.
+
+To build all software in serial, set ``build_jobs`` to 1.
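The behavior this documentation describes can be sketched in a few lines of Python. The helper name ``make_j_flag`` is hypothetical, not part of Spack; it only illustrates how a ``build_jobs`` value would map to a ``make`` flag:

```python
import multiprocessing

def make_j_flag(build_jobs=None):
    """Return the -j flag make would receive: the configured
    build_jobs if set, otherwise every available core (sketch only)."""
    jobs = build_jobs if build_jobs is not None else multiprocessing.cpu_count()
    return "-j{0}".format(jobs)

print(make_j_flag(4))  # -j4: `spack install` would run `make -j4`
print(make_j_flag(1))  # -j1: fully serial builds
```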


@@ -23,6 +23,7 @@
 # License along with this program; if not, write to the Free Software
 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 ##############################################################################
+import multiprocessing
 import os
 import sys
 import tempfile
@@ -141,6 +142,11 @@
 dirty = _config.get('dirty', False)
+
+# The number of jobs to use when building in parallel.
+# By default, use all cores on the machine.
+build_jobs = _config.get('build_jobs', multiprocessing.cpu_count())
+
 #-----------------------------------------------------------------------------
 # When packages call 'from spack import *', this extra stuff is brought in.
 #
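The fallback above is plain ``dict.get`` with a computed default. A standalone sketch of the pattern (the ``_config`` dict here is a stand-in for Spack's parsed configuration, not the real object):

```python
import multiprocessing

_config = {}  # stand-in config: user did not set build_jobs
build_jobs = _config.get('build_jobs', multiprocessing.cpu_count())
assert build_jobs == multiprocessing.cpu_count()  # falls back to all cores

_config = {'build_jobs': 4}  # user set build_jobs: 4 in config.yaml
build_jobs = _config.get('build_jobs', multiprocessing.cpu_count())
assert build_jobs == 4  # the configured value wins over the default
```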


@@ -357,7 +357,7 @@ def set_module_variables_for_package(pkg, module):
     This makes things easier for package writers.
     """
     # number of jobs spack will build with.
-    jobs = multiprocessing.cpu_count()
+    jobs = spack.build_jobs
     if not pkg.parallel:
         jobs = 1
     elif pkg.make_jobs:
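The precedence in this hunk (a serial package always gets 1 job, then a package's own ``make_jobs`` overrides the site-wide default) can be isolated as a small function. ``resolve_jobs`` is a hypothetical helper that mirrors the diff's logic, not Spack code:

```python
def resolve_jobs(default_jobs, parallel, make_jobs):
    """Serial packages always build with 1 job; a package-level
    make_jobs overrides the site-wide default when set (sketch)."""
    jobs = default_jobs
    if not parallel:
        jobs = 1
    elif make_jobs:
        jobs = make_jobs
    return jobs

print(resolve_jobs(8, parallel=False, make_jobs=4))   # 1: serial wins
print(resolve_jobs(8, parallel=True, make_jobs=4))    # 4: package override
print(resolve_jobs(8, parallel=True, make_jobs=None)) # 8: site default
```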


@@ -485,7 +485,7 @@ class SomePackage(Package):
     parallel = True
 
     """# jobs to use for parallel make. If set, overrides default of ncpus."""
-    make_jobs = None
+    make_jobs = spack.build_jobs
 
     """By default do not run tests within package's install()"""
     run_tests = False


@@ -63,6 +63,7 @@
             'verify_ssl': {'type': 'boolean'},
             'checksum': {'type': 'boolean'},
             'dirty': {'type': 'boolean'},
+            'build_jobs': {'type': 'integer', 'minimum': 1},
         }
     },
 },
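The schema fragment ``{'type': 'integer', 'minimum': 1}`` rejects zero, negatives, and non-integers. A minimal hand-rolled check of that same rule (a sketch only, not Spack's actual jsonschema-based validator); note that ``bool`` must be excluded explicitly, since Python treats ``True`` as an ``int`` but JSON Schema does not count booleans as integers:

```python
def valid_build_jobs(value):
    """Mirror {'type': 'integer', 'minimum': 1}: an integer >= 1.
    bool is excluded because isinstance(True, int) is True in Python."""
    return isinstance(value, int) and not isinstance(value, bool) and value >= 1

print(valid_build_jobs(4))    # True
print(valid_build_jobs(0))    # False: below the minimum
print(valid_build_jobs("4"))  # False: wrong type
print(valid_build_jobs(True)) # False: booleans are not schema integers
```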